00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 1821
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3087
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.060 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.060 The recommended git tool is: git
00:00:00.060 using credential 00000000-0000-0000-0000-000000000002
00:00:00.062 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.095 Fetching changes from the remote Git repository
00:00:00.096 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.133 Using shallow fetch with depth 1
00:00:00.133 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.133 > git --version # timeout=10
00:00:00.172 > git --version # 'git version 2.39.2'
00:00:00.172 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.173 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.173 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.241 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.254 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.266 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD)
00:00:05.266 > git config core.sparsecheckout # timeout=10
00:00:05.277 > git read-tree -mu HEAD # timeout=10
00:00:05.295 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5
00:00:05.316 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule"
00:00:05.316 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10
00:00:05.397 [Pipeline] Start of Pipeline
00:00:05.411 [Pipeline] library
00:00:05.412 Loading library shm_lib@master
00:00:05.413 Library shm_lib@master is cached. Copying from home.
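The checkout above is Jenkins' shallow-fetch-and-pin pattern: fetch a single branch at depth 1, resolve FETCH_HEAD, and force-checkout that exact commit. A minimal standalone sketch of the same flow, with the repository URL and revision taken from the log and the workspace path an illustrative placeholder:

    # Shallow-fetch one branch, then pin the checkout to the exact commit
    # that FETCH_HEAD resolved to (detached HEAD, independent of branch moves).
    git init /tmp/jbp-workspace && cd /tmp/jbp-workspace
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    rev=$(git rev-parse 'FETCH_HEAD^{commit}')
    git checkout -f "$rev"    # the log pins 10da8f6d99838e411e4e94523ded0bfebf3e7100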
00:00:05.428 [Pipeline] node
00:00:20.430 Still waiting to schedule task
00:00:20.430 ‘GP10’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.430 ‘GP15’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.430 ‘GP16’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.430 ‘GP18’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.430 ‘GP19’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘GP1’ is offline
00:00:20.431 ‘GP20’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘GP21’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘GP22’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘GP24’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘GP4’ is offline
00:00:20.431 ‘GP5’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘ImageBuilder1’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘Jenkins’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘ME1’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘ME3’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘PE5’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘SM10’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘SM11’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘SM1’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘SM28’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘SM29’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘SM2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘SM30’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘SM31’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘SM32’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘SM33’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘SM34’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘SM35’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘SM5’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘SM6’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘SM7’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘SM8’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘VM-host-PE1’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘VM-host-PE2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘VM-host-PE3’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘VM-host-PE4’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘VM-host-SM18’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘VM-host-WFP1’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘VM-host-WFP25’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘WCP0’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘WCP2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘WCP4’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘WFP12’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘WFP13’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.431 ‘WFP16’ is offline
00:00:20.431 ‘WFP17’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.432 ‘WFP21’ is offline
00:00:20.432 ‘WFP24’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.432 ‘WFP2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.432 ‘WFP36’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.432 ‘WFP45’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.432 ‘WFP49’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.432 ‘WFP4’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.432 ‘WFP63’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.432 ‘WFP64’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.432 ‘WFP6’ is offline
00:00:20.432 ‘WFP9’ is offline
00:00:20.432 ‘ipxe-staging’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.432 ‘prc_bsc_waikikibeach64’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.432 ‘spdk-pxe-01’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:20.432 ‘spdk-pxe-02’ doesn’t have label ‘DiskNvme&&NetCVL’
00:09:38.145 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:09:38.147 [Pipeline] {
00:09:38.163 [Pipeline] catchError
00:09:38.165 [Pipeline] {
00:09:38.181 [Pipeline] wrap
00:09:38.194 [Pipeline] {
00:09:38.204 [Pipeline] stage
00:09:38.206 [Pipeline] { (Prologue)
00:09:38.417 [Pipeline] sh
00:09:38.708 + logger -p user.info -t JENKINS-CI
00:09:38.731 [Pipeline] echo
00:09:38.733 Node: GP6
00:09:38.743 [Pipeline] sh
00:09:39.042 [Pipeline] setCustomBuildProperty
00:09:39.056 [Pipeline] echo
00:09:39.058 Cleanup processes
00:09:39.064 [Pipeline] sh
00:09:39.346 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:09:39.346 2027688 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:09:39.361 [Pipeline] sh
00:09:39.644 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:09:39.644 ++ grep -v 'sudo pgrep'
00:09:39.644 ++ awk '{print $1}'
00:09:39.644 + sudo kill -9
00:09:39.644 + true
00:09:39.661 [Pipeline] cleanWs
00:09:39.672 [WS-CLEANUP] Deleting project workspace...
00:09:39.672 [WS-CLEANUP] Deferred wipeout is used...
00:09:39.678 [WS-CLEANUP] done
00:09:39.684 [Pipeline] setCustomBuildProperty
00:09:39.701 [Pipeline] sh
00:09:39.984 + sudo git config --global --replace-all safe.directory '*'
00:09:40.062 [Pipeline] nodesByLabel
00:09:40.064 Found a total of 1 nodes with the 'sorcerer' label
00:09:40.076 [Pipeline] httpRequest
00:09:40.080 HttpMethod: GET
00:09:40.081 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:09:40.085 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:09:40.088 Response Code: HTTP/1.1 200 OK
00:09:40.089 Success: Status code 200 is in the accepted range: 200,404
00:09:40.089 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:09:40.228 [Pipeline] sh
00:09:40.507 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:09:40.526 [Pipeline] httpRequest
00:09:40.531 HttpMethod: GET
00:09:40.532 URL: http://10.211.164.101/packages/spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz
00:09:40.532 Sending request to url: http://10.211.164.101/packages/spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz
00:09:40.534 Response Code: HTTP/1.1 200 OK
00:09:40.535 Success: Status code 200 is in the accepted range: 200,404
00:09:40.535 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz
00:09:42.658 [Pipeline] sh
00:09:42.935 + tar --no-same-owner -xf spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz
00:09:45.526 [Pipeline] sh
00:09:45.808 + git -C spdk log --oneline -n5
00:09:45.808 4506c0c36 test/common: Enable inherit_errexit
00:09:45.808 b24df7cfa test: Drop superfluous calls to print_backtrace()
00:09:45.808 7b52e4c17 test/scheduler: Meassure utime of $spdk_pid threads as a fallback
00:09:45.808 1dc065205 test/scheduler: Calculate median of the cpu load samples
00:09:45.808 b22f1b34d test/scheduler: Enhance lookup of the $old_cgroup in move_proc()
00:09:45.827 [Pipeline] withCredentials
00:09:45.836 > git --version # timeout=10
00:09:45.851 > git --version # 'git version 2.39.2'
00:09:45.867 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:09:45.869 [Pipeline] {
00:09:45.877 [Pipeline] retry
00:09:45.879 [Pipeline] {
00:09:45.893 [Pipeline] sh
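The "Cleanup processes" step earlier in this stage is the reusable piece here: it kills anything still running out of the workspace before the build starts. A sketch of that pipeline under the same paths; the `|| true` mirrors the `+ true` in the trace, so an empty match list does not fail the job:

    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # pgrep -af prints PID plus full command line for anything matching the
    # path; drop the pgrep invocation itself and keep only the PID column.
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    sudo kill -9 $pids || true    # tolerate "no PIDs", exactly as the trace does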
00:09:46.173 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:09:46.185 [Pipeline] }
00:09:46.208 [Pipeline] // retry
00:09:46.213 [Pipeline] }
00:09:46.234 [Pipeline] // withCredentials
00:09:46.247 [Pipeline] httpRequest
00:09:46.251 HttpMethod: GET
00:09:46.252 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:09:46.254 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:09:46.257 Response Code: HTTP/1.1 200 OK
00:09:46.258 Success: Status code 200 is in the accepted range: 200,404
00:09:46.258 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:09:47.939 [Pipeline] sh
00:09:48.219 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:09:50.125 [Pipeline] sh
00:09:50.404 + git -C dpdk log --oneline -n5
00:09:50.404 caf0f5d395 version: 22.11.4
00:09:50.404 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:09:50.404 dc9c799c7d vhost: fix missing spinlock unlock
00:09:50.404 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:09:50.404 6ef77f2a5e net/gve: fix RX buffer size alignment
00:09:50.414 [Pipeline] }
00:09:50.431 [Pipeline] // stage
00:09:50.440 [Pipeline] stage
00:09:50.442 [Pipeline] { (Prepare)
00:09:50.462 [Pipeline] writeFile
00:09:50.475 [Pipeline] sh
00:09:50.754 + logger -p user.info -t JENKINS-CI
00:09:50.765 [Pipeline] sh
00:09:51.044 + logger -p user.info -t JENKINS-CI
00:09:51.058 [Pipeline] sh
00:09:51.337 + cat autorun-spdk.conf
00:09:51.337 SPDK_RUN_FUNCTIONAL_TEST=1
00:09:51.337 SPDK_TEST_NVMF=1
00:09:51.337 SPDK_TEST_NVME_CLI=1
00:09:51.337 SPDK_TEST_NVMF_TRANSPORT=tcp
00:09:51.337 SPDK_TEST_NVMF_NICS=e810
00:09:51.337 SPDK_TEST_VFIOUSER=1
00:09:51.337 SPDK_RUN_UBSAN=1
00:09:51.337 NET_TYPE=phy
00:09:51.337 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:09:51.337 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:09:51.343 RUN_NIGHTLY=1
00:09:51.349 [Pipeline] readFile
00:09:51.376 [Pipeline] withEnv
00:09:51.378 [Pipeline] {
00:09:51.393 [Pipeline] sh
00:09:51.675 + set -ex
00:09:51.675 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:09:51.675 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:09:51.675 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:09:51.675 ++ SPDK_TEST_NVMF=1
00:09:51.675 ++ SPDK_TEST_NVME_CLI=1
00:09:51.675 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:09:51.675 ++ SPDK_TEST_NVMF_NICS=e810
00:09:51.675 ++ SPDK_TEST_VFIOUSER=1
00:09:51.675 ++ SPDK_RUN_UBSAN=1
00:09:51.675 ++ NET_TYPE=phy
00:09:51.675 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:09:51.675 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:09:51.675 ++ RUN_NIGHTLY=1
00:09:51.675 + case $SPDK_TEST_NVMF_NICS in
00:09:51.675 + DRIVERS=ice
00:09:51.675 + [[ tcp == \r\d\m\a ]]
00:09:51.675 + [[ -n ice ]]
00:09:51.675 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:09:51.675 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:09:51.675 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:09:51.675 rmmod: ERROR: Module irdma is not currently loaded
00:09:51.675 rmmod: ERROR: Module i40iw is not currently loaded
00:09:51.675 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:09:51.675 + true
00:09:51.675 + for D in $DRIVERS
00:09:51.675 + sudo modprobe ice
00:09:51.675 + exit 0
00:09:51.685 [Pipeline] }
00:09:51.706 [Pipeline] // withEnv
00:09:51.713 [Pipeline] }
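The withEnv block just traced drives NIC preparation from autorun-spdk.conf. A condensed sketch of that logic; the e810-to-ice mapping is inferred from this run's values (SPDK_TEST_NVMF_NICS=e810 selects DRIVERS=ice), and the transport test approximates the `[[ tcp == rdma ]]` branch in the trace:

    source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
    case $SPDK_TEST_NVMF_NICS in
        e810) DRIVERS=ice ;;    # Intel E810 (CVL) NICs use the ice driver
    esac
    if [[ $SPDK_TEST_NVMF_TRANSPORT != rdma && -n $DRIVERS ]]; then
        # Unload RDMA providers that could hold the NIC; rmmod errors for
        # modules that are not loaded are tolerated, as in the log.
        sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
        for D in $DRIVERS; do sudo modprobe "$D"; done
    fi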
00:09:51.730 [Pipeline] // stage
00:09:51.739 [Pipeline] catchError
00:09:51.741 [Pipeline] {
00:09:51.759 [Pipeline] timeout
00:09:51.759 Timeout set to expire in 40 min
00:09:51.761 [Pipeline] {
00:09:51.779 [Pipeline] stage
00:09:51.781 [Pipeline] { (Tests)
00:09:51.799 [Pipeline] sh
00:09:52.081 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:09:52.081 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:09:52.081 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:09:52.081 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:09:52.081 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:09:52.081 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:09:52.081 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:09:52.081 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:09:52.081 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:09:52.081 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:09:52.081 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:09:52.081 + source /etc/os-release
00:09:52.081 ++ NAME='Fedora Linux'
00:09:52.081 ++ VERSION='38 (Cloud Edition)'
00:09:52.081 ++ ID=fedora
00:09:52.081 ++ VERSION_ID=38
00:09:52.081 ++ VERSION_CODENAME=
00:09:52.081 ++ PLATFORM_ID=platform:f38
00:09:52.081 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:09:52.081 ++ ANSI_COLOR='0;38;2;60;110;180'
00:09:52.081 ++ LOGO=fedora-logo-icon
00:09:52.081 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:09:52.081 ++ HOME_URL=https://fedoraproject.org/
00:09:52.081 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:09:52.081 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:09:52.081 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:09:52.081 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:09:52.081 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:09:52.081 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:09:52.081 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:09:52.081 ++ SUPPORT_END=2024-05-14
00:09:52.081 ++ VARIANT='Cloud Edition'
00:09:52.081 ++ VARIANT_ID=cloud
00:09:52.081 + uname -a
00:09:52.081 Linux spdk-gp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:09:52.081 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:09:53.456 Hugepages
00:09:53.456 node hugesize free / total
00:09:53.456 node0 1048576kB 0 / 0
00:09:53.456 node0 2048kB 0 / 0
00:09:53.456 node1 1048576kB 0 / 0
00:09:53.456 node1 2048kB 0 / 0
00:09:53.456
00:09:53.456 Type BDF Vendor Device NUMA Driver Device Block devices
00:09:53.456 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:09:53.456 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:09:53.456 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:09:53.456 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:09:53.456 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:09:53.456 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:09:53.456 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:09:53.456 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:09:53.456 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:09:53.456 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:09:53.456 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:09:53.456 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:09:53.456 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:09:53.456 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:09:53.456 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:09:53.456 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:09:53.456 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:09:53.456 + rm -f /tmp/spdk-ld-path
00:09:53.456 + source autorun-spdk.conf
00:09:53.456 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:09:53.456 ++ SPDK_TEST_NVMF=1
00:09:53.456 ++ SPDK_TEST_NVME_CLI=1
00:09:53.456 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:09:53.456 ++ SPDK_TEST_NVMF_NICS=e810
00:09:53.456 ++ SPDK_TEST_VFIOUSER=1
00:09:53.456 ++ SPDK_RUN_UBSAN=1
00:09:53.456 ++ NET_TYPE=phy
00:09:53.456 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:09:53.456 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:09:53.456 ++ RUN_NIGHTLY=1
00:09:53.456 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:09:53.456 + [[ -n '' ]]
00:09:53.456 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:09:53.456 + for M in /var/spdk/build-*-manifest.txt
00:09:53.456 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:09:53.456 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:09:53.456 + for M in /var/spdk/build-*-manifest.txt
00:09:53.456 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:09:53.456 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:09:53.456 ++ uname
00:09:53.456 + [[ Linux == \L\i\n\u\x ]]
00:09:53.456 + sudo dmesg -T
00:09:53.456 + sudo dmesg --clear
00:09:53.456 + dmesg_pid=2028483
00:09:53.456 + [[ Fedora Linux == FreeBSD ]]
00:09:53.456 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:09:53.456 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:09:53.456 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:09:53.456 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:09:53.456 + sudo dmesg -Tw
00:09:53.456 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:09:53.456 + [[ -x /usr/src/fio-static/fio ]]
00:09:53.456 + export FIO_BIN=/usr/src/fio-static/fio
00:09:53.456 + FIO_BIN=/usr/src/fio-static/fio
00:09:53.456 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:09:53.456 + [[ ! -v VFIO_QEMU_BIN ]]
00:09:53.456 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:09:53.456 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:09:53.456 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:09:53.456 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:09:53.456 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:09:53.456 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:09:53.456 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:09:53.456 Test configuration:
00:09:53.456 SPDK_RUN_FUNCTIONAL_TEST=1
00:09:53.456 SPDK_TEST_NVMF=1
00:09:53.456 SPDK_TEST_NVME_CLI=1
00:09:53.456 SPDK_TEST_NVMF_TRANSPORT=tcp
00:09:53.456 SPDK_TEST_NVMF_NICS=e810
00:09:53.456 SPDK_TEST_VFIOUSER=1
00:09:53.456 SPDK_RUN_UBSAN=1
00:09:53.456 NET_TYPE=phy
00:09:53.456 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:09:53.456 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:09:53.456 RUN_NIGHTLY=1
08:35:48 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
08:35:48 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
08:35:48 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
08:35:48 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
08:35:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
08:35:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
08:35:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
08:35:48 -- paths/export.sh@5 -- $ export PATH
08:35:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
08:35:48 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
08:35:48 -- common/autobuild_common.sh@437 -- $ date +%s
00:09:53.716 08:35:48 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715754948.XXXXXX
00:09:53.716 08:35:48 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715754948.gxUqcz
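The autobuild prologue just traced stamps the run with the epoch time and creates a disposable workspace for it; the same timestamp is reused below to name the resource-monitor logs. A reduced sketch of that pattern:

    ts=$(date +%s)                                    # 1715754948 in this run
    SPDK_WORKSPACE=$(mktemp -dt "spdk_${ts}.XXXXXX")  # e.g. /tmp/spdk_1715754948.gxUqcz
    export SPDK_WORKSPACE
    # The monitors started next log to "$out/power" using the same suffix,
    # e.g. monitor.autobuild.sh.$ts (cf. the "Redirecting to ..." lines below).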
08:35:48 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
08:35:48 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']'
08:35:48 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
08:35:48 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
08:35:48 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
08:35:48 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
08:35:48 -- common/autobuild_common.sh@453 -- $ get_config_params
08:35:48 -- common/autotest_common.sh@396 -- $ xtrace_disable
08:35:48 -- common/autotest_common.sh@10 -- $ set +x
00:09:53.716 08:35:48 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
08:35:48 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
08:35:48 -- pm/common@17 -- $ local monitor
08:35:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
08:35:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
08:35:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
08:35:48 -- pm/common@21 -- $ date +%s
08:35:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
08:35:48 -- pm/common@21 -- $ date +%s
08:35:48 -- pm/common@25 -- $ sleep 1
08:35:48 -- pm/common@21 -- $ date +%s
08:35:48 -- pm/common@21 -- $ date +%s
08:35:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715754948
08:35:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715754948
08:35:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715754948
08:35:48 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715754948
00:09:53.716 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715754948_collect-vmstat.pm.log
00:09:53.716 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715754948_collect-cpu-load.pm.log
00:09:53.716 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715754948_collect-cpu-temp.pm.log
00:09:53.716 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715754948_collect-bmc-pm.bmc.pm.log
00:09:54.650 08:35:49 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
08:35:49 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
08:35:49 -- spdk/autobuild.sh@12 -- $ umask 022
08:35:49 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
08:35:49 -- spdk/autobuild.sh@16 -- $ date -u
00:09:54.650 Wed May 15 06:35:49 AM UTC 2024
08:35:49 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:09:54.650 v24.05-pre-658-g4506c0c36
08:35:49 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
08:35:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
08:35:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
08:35:49 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']'
08:35:49 -- common/autotest_common.sh@1104 -- $ xtrace_disable
08:35:49 -- common/autotest_common.sh@10 -- $ set +x
00:09:54.650 ************************************
00:09:54.650 START TEST ubsan
00:09:54.650 ************************************
08:35:49 ubsan -- common/autotest_common.sh@1122 -- $ echo 'using ubsan'
00:09:54.650 using ubsan
00:09:54.650
00:09:54.650 real 0m0.000s
00:09:54.650 user 0m0.000s
00:09:54.650 sys 0m0.000s
08:35:49 ubsan -- common/autotest_common.sh@1123 -- $ xtrace_disable
08:35:49 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:09:54.650 ************************************
00:09:54.650 END TEST ubsan
00:09:54.650 ************************************
08:35:49 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
08:35:49 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
08:35:49 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk
08:35:49 -- common/autotest_common.sh@1098 -- $ '[' 2 -le 1 ']'
08:35:49 -- common/autotest_common.sh@1104 -- $ xtrace_disable
08:35:49 -- common/autotest_common.sh@10 -- $ set +x
00:09:54.650 ************************************
00:09:54.650 START TEST build_native_dpdk
00:09:54.650 ************************************
08:35:49 build_native_dpdk -- common/autotest_common.sh@1122 -- $ _build_native_dpdk
08:35:49 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
08:35:49 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
08:35:49 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
08:35:49 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
08:35:49 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
08:35:49 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
08:35:49 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
08:35:49 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
08:35:49 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
08:35:49 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:09:54.650 08:35:49 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
08:35:49 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
08:35:49 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
08:35:49 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
08:35:49 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
08:35:49 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
08:35:49 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
08:35:49 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
08:35:49 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
08:35:49 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:09:54.650 caf0f5d395 version: 22.11.4
00:09:54.650 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:09:54.650 dc9c799c7d vhost: fix missing spinlock unlock
00:09:54.650 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:09:54.650 6ef77f2a5e net/gve: fix RX buffer size alignment
08:35:49 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
08:35:49 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
08:35:49 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
08:35:49 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
08:35:49 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
08:35:49 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
08:35:49 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
08:35:49 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
08:35:49 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
08:35:49 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
08:35:49 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
08:35:49 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
08:35:49 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
08:35:49 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
08:35:49 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
08:35:49 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
08:35:49 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
08:35:49 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0
08:35:49 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:09:54.650 08:35:49 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
08:35:49 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
08:35:49 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
08:35:49 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
08:35:49 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
08:35:49 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
08:35:49 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
08:35:49 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
08:35:49 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
08:35:49 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
08:35:49 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
08:35:49 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
08:35:49 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
08:35:49 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
08:35:49 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22
08:35:49 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22
08:35:49 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]]
08:35:49 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22
08:35:49 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22
08:35:49 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
08:35:49 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
08:35:49 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
08:35:49 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
08:35:49 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
08:35:49 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
08:35:49 build_native_dpdk -- scripts/common.sh@364 -- $ return 1
08:35:49 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:09:54.651 patching file config/rte_config.h
00:09:54.651 Hunk #1 succeeded at 60 (offset 1 line).
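The cmp_versions trace above is SPDK's field-by-field version comparison: split each version on ".", "-" and ":" via IFS, then compare the fields numerically until one differs. Here 22 > 21 in the first field, so `lt 22.11.4 21.11.0` returns 1 and the rte_config.h patch path for newer DPDK runs. A simplified, self-contained rendering of the "<" case only, not the full scripts/common.sh implementation:

    version_lt() {    # returns 0 if $1 < $2
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            # Missing fields compare as 0; the first differing field decides.
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1      # equal versions are not "less than"
    }
    version_lt 22.11.4 21.11.0 && echo old || echo new   # prints "new", so patch -p1 runs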
00:09:54.651 08:35:49 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false
08:35:49 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s
08:35:49 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']'
08:35:49 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
08:35:49 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:09:58.834 The Meson build system
00:09:58.834 Version: 1.3.1
00:09:58.834 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:09:58.834 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
00:09:58.834 Build type: native build
00:09:58.834 Program cat found: YES (/usr/bin/cat)
00:09:58.834 Project name: DPDK
00:09:58.834 Project version: 22.11.4
00:09:58.834 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:09:58.834 C linker for the host machine: gcc ld.bfd 2.39-16
00:09:58.834 Host machine cpu family: x86_64
00:09:58.834 Host machine cpu: x86_64
00:09:58.834 Message: ## Building in Developer Mode ##
00:09:58.834 Program pkg-config found: YES (/usr/bin/pkg-config)
00:09:58.834 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:09:58.834 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:09:58.834 Program objdump found: YES (/usr/bin/objdump)
00:09:58.834 Program python3 found: YES (/usr/bin/python3)
00:09:58.834 Program cat found: YES (/usr/bin/cat)
00:09:58.834 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:09:58.834 Checking for size of "void *" : 8
00:09:58.834 Checking for size of "void *" : 8 (cached)
00:09:58.834 Library m found: YES
00:09:58.834 Library numa found: YES
00:09:58.834 Has header "numaif.h" : YES
00:09:58.834 Library fdt found: NO
00:09:58.834 Library execinfo found: NO
00:09:58.834 Has header "execinfo.h" : YES
00:09:58.834 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:09:58.834 Run-time dependency libarchive found: NO (tried pkgconfig)
00:09:58.834 Run-time dependency libbsd found: NO (tried pkgconfig)
00:09:58.834 Run-time dependency jansson found: NO (tried pkgconfig)
00:09:58.834 Run-time dependency openssl found: YES 3.0.9
00:09:58.834 Run-time dependency libpcap found: YES 1.10.4
00:09:58.834 Has header "pcap.h" with dependency libpcap: YES
00:09:58.834 Compiler for C supports arguments -Wcast-qual: YES
00:09:58.834 Compiler for C supports arguments -Wdeprecated: YES
00:09:58.834 Compiler for C supports arguments -Wformat: YES
00:09:58.834 Compiler for C supports arguments -Wformat-nonliteral: NO
00:09:58.834 Compiler for C supports arguments -Wformat-security: NO
00:09:58.834 Compiler for C supports arguments -Wmissing-declarations: YES
00:09:58.834 Compiler for C supports arguments -Wmissing-prototypes: YES
00:09:58.834 Compiler for C supports arguments -Wnested-externs: YES
00:09:58.834 Compiler for C supports arguments -Wold-style-definition: YES
00:09:58.834 Compiler for C supports arguments -Wpointer-arith: YES
00:09:58.834 Compiler for C supports arguments -Wsign-compare: YES
00:09:58.834 Compiler for C supports arguments -Wstrict-prototypes: YES
00:09:58.834 Compiler for C supports arguments -Wundef: YES
00:09:58.834 Compiler for C supports arguments -Wwrite-strings: YES
00:09:58.834 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:09:58.834 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:09:58.834 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:09:58.834 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:09:58.834 Compiler for C supports arguments -mavx512f: YES
00:09:58.834 Checking if "AVX512 checking" compiles: YES
00:09:58.834 Fetching value of define "__SSE4_2__" : 1
00:09:58.834 Fetching value of define "__AES__" : 1
00:09:58.834 Fetching value of define "__AVX__" : 1
00:09:58.834 Fetching value of define "__AVX2__" : (undefined)
00:09:58.834 Fetching value of define "__AVX512BW__" : (undefined)
00:09:58.834 Fetching value of define "__AVX512CD__" : (undefined)
00:09:58.834 Fetching value of define "__AVX512DQ__" : (undefined)
00:09:58.834 Fetching value of define "__AVX512F__" : (undefined)
00:09:58.834 Fetching value of define "__AVX512VL__" : (undefined)
00:09:58.834 Fetching value of define "__PCLMUL__" : 1
00:09:58.834 Fetching value of define "__RDRND__" : 1
00:09:58.834 Fetching value of define "__RDSEED__" : (undefined)
00:09:58.834 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:09:58.834 Compiler for C supports arguments -Wno-format-truncation: YES
00:09:58.834 Message: lib/kvargs: Defining dependency "kvargs"
00:09:58.834 Message: lib/telemetry: Defining dependency "telemetry"
00:09:58.834 Checking for function "getentropy" : YES
00:09:58.834 Message: lib/eal: Defining dependency "eal"
00:09:58.834 Message: lib/ring: Defining dependency "ring"
00:09:58.834 Message: lib/rcu: Defining dependency "rcu"
00:09:58.834 Message: lib/mempool: Defining dependency "mempool"
00:09:58.834 Message: lib/mbuf: Defining dependency "mbuf"
Fetching value of define "__PCLMUL__" : 1 (cached) 00:09:58.834 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:09:58.834 Compiler for C supports arguments -mpclmul: YES 00:09:58.834 Compiler for C supports arguments -maes: YES 00:09:58.834 Compiler for C supports arguments -mavx512f: YES (cached) 00:09:58.834 Compiler for C supports arguments -mavx512bw: YES 00:09:58.834 Compiler for C supports arguments -mavx512dq: YES 00:09:58.834 Compiler for C supports arguments -mavx512vl: YES 00:09:58.834 Compiler for C supports arguments -mvpclmulqdq: YES 00:09:58.834 Compiler for C supports arguments -mavx2: YES 00:09:58.834 Compiler for C supports arguments -mavx: YES 00:09:58.834 Message: lib/net: Defining dependency "net" 00:09:58.834 Message: lib/meter: Defining dependency "meter" 00:09:58.834 Message: lib/ethdev: Defining dependency "ethdev" 00:09:58.834 Message: lib/pci: Defining dependency "pci" 00:09:58.834 Message: lib/cmdline: Defining dependency "cmdline" 00:09:58.834 Message: lib/metrics: Defining dependency "metrics" 00:09:58.834 Message: lib/hash: Defining dependency "hash" 00:09:58.834 Message: lib/timer: Defining dependency "timer" 00:09:58.834 Fetching value of define "__AVX2__" : (undefined) (cached) 00:09:58.834 Compiler for C supports arguments -mavx2: YES (cached) 00:09:58.834 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:09:58.834 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:09:58.834 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:09:58.834 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:09:58.834 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:09:58.834 Message: lib/acl: Defining dependency "acl" 00:09:58.834 Message: lib/bbdev: Defining dependency "bbdev" 00:09:58.834 Message: lib/bitratestats: Defining dependency "bitratestats" 00:09:58.835 Run-time dependency libelf found: YES 0.190 00:09:58.835 Message: lib/bpf: Defining dependency "bpf" 00:09:58.835 Message: lib/cfgfile: Defining dependency "cfgfile" 00:09:58.835 Message: lib/compressdev: Defining dependency "compressdev" 00:09:58.835 Message: lib/cryptodev: Defining dependency "cryptodev" 00:09:58.835 Message: lib/distributor: Defining dependency "distributor" 00:09:58.835 Message: lib/efd: Defining dependency "efd" 00:09:58.835 Message: lib/eventdev: Defining dependency "eventdev" 00:09:58.835 Message: lib/gpudev: Defining dependency "gpudev" 00:09:58.835 Message: lib/gro: Defining dependency "gro" 00:09:58.835 Message: lib/gso: Defining dependency "gso" 00:09:58.835 Message: lib/ip_frag: Defining dependency "ip_frag" 00:09:58.835 Message: lib/jobstats: Defining dependency "jobstats" 00:09:58.835 Message: lib/latencystats: Defining dependency "latencystats" 00:09:58.835 Message: lib/lpm: Defining dependency "lpm" 00:09:58.835 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:09:58.835 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:09:58.835 Fetching value of define "__AVX512IFMA__" : (undefined) 00:09:58.835 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:09:58.835 Message: lib/member: Defining dependency "member" 00:09:58.835 Message: lib/pcapng: Defining dependency "pcapng" 00:09:58.835 Compiler for C supports arguments -Wno-cast-qual: YES 00:09:58.835 Message: lib/power: Defining dependency "power" 00:09:58.835 Message: lib/rawdev: Defining dependency "rawdev" 00:09:58.835 Message: lib/regexdev: Defining dependency "regexdev" 
00:09:58.835 Message: lib/dmadev: Defining dependency "dmadev"
00:09:58.835 Message: lib/rib: Defining dependency "rib"
00:09:58.835 Message: lib/reorder: Defining dependency "reorder"
00:09:58.835 Message: lib/sched: Defining dependency "sched"
00:09:58.835 Message: lib/security: Defining dependency "security"
00:09:58.835 Message: lib/stack: Defining dependency "stack"
00:09:58.835 Has header "linux/userfaultfd.h" : YES
00:09:58.835 Message: lib/vhost: Defining dependency "vhost"
00:09:58.835 Message: lib/ipsec: Defining dependency "ipsec"
00:09:58.835 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:09:58.835 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:09:58.835 Compiler for C supports arguments -mavx512f -mavx512dq: YES
00:09:58.835 Compiler for C supports arguments -mavx512bw: YES (cached)
00:09:58.835 Message: lib/fib: Defining dependency "fib"
00:09:58.835 Message: lib/port: Defining dependency "port"
00:09:58.835 Message: lib/pdump: Defining dependency "pdump"
00:09:58.835 Message: lib/table: Defining dependency "table"
00:09:58.835 Message: lib/pipeline: Defining dependency "pipeline"
00:09:58.835 Message: lib/graph: Defining dependency "graph"
00:09:58.835 Message: lib/node: Defining dependency "node"
00:09:58.835 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:09:58.835 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:09:58.835 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:09:58.835 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:09:58.835 Compiler for C supports arguments -Wno-sign-compare: YES
00:09:58.835 Compiler for C supports arguments -Wno-unused-value: YES
00:10:00.225 Compiler for C supports arguments -Wno-format: YES
00:10:00.225 Compiler for C supports arguments -Wno-format-security: YES
00:10:00.225 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:10:00.225 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:10:00.225 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:10:00.225 Compiler for C supports arguments -Wno-unused-parameter: YES
00:10:00.225 Fetching value of define "__AVX2__" : (undefined) (cached)
00:10:00.225 Compiler for C supports arguments -mavx2: YES (cached)
00:10:00.225 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:10:00.225 Compiler for C supports arguments -mavx512f: YES (cached)
00:10:00.225 Compiler for C supports arguments -mavx512bw: YES (cached)
00:10:00.225 Compiler for C supports arguments -march=skylake-avx512: YES
00:10:00.225 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:10:00.225 Program doxygen found: YES (/usr/bin/doxygen)
00:10:00.225 Configuring doxy-api.conf using configuration
00:10:00.225 Program sphinx-build found: NO
00:10:00.225 Configuring rte_build_config.h using configuration
00:10:00.225 Message:
00:10:00.225 =================
00:10:00.225 Applications Enabled
00:10:00.225 =================
00:10:00.225
00:10:00.225 apps:
00:10:00.225 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf,
00:10:00.225 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad,
00:10:00.225 test-security-perf,
00:10:00.225
00:10:00.225 Message:
00:10:00.225 =================
00:10:00.225 Libraries Enabled
00:10:00.225 =================
00:10:00.225
00:10:00.225 libs:
00:10:00.225 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:10:00.225 meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:10:00.225 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:10:00.225 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:10:00.225 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:10:00.225 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:10:00.225 table, pipeline, graph, node,
00:10:00.225
00:10:00.225 Message:
00:10:00.225 ===============
00:10:00.225 Drivers Enabled
00:10:00.225 ===============
00:10:00.225
00:10:00.225 common:
00:10:00.225
00:10:00.225 bus:
00:10:00.225 pci, vdev,
00:10:00.225 mempool:
00:10:00.225 ring,
00:10:00.225 dma:
00:10:00.225
00:10:00.225 net:
00:10:00.225 i40e,
00:10:00.225 raw:
00:10:00.225
00:10:00.225 crypto:
00:10:00.225
00:10:00.225 compress:
00:10:00.225
00:10:00.225 regex:
00:10:00.225
00:10:00.225 vdpa:
00:10:00.225
00:10:00.225 event:
00:10:00.225
00:10:00.225 baseband:
00:10:00.225
00:10:00.225 gpu:
00:10:00.225
00:10:00.225
00:10:00.225 Message:
00:10:00.225 =================
00:10:00.225 Content Skipped
00:10:00.225 =================
00:10:00.225
00:10:00.225 apps:
00:10:00.225
00:10:00.225 libs:
00:10:00.225 kni: explicitly disabled via build config (deprecated lib)
00:10:00.225 flow_classify: explicitly disabled via build config (deprecated lib)
00:10:00.225
00:10:00.225 drivers:
00:10:00.225 common/cpt: not in enabled drivers build config
00:10:00.225 common/dpaax: not in enabled drivers build config
00:10:00.225 common/iavf: not in enabled drivers build config
00:10:00.225 common/idpf: not in enabled drivers build config
00:10:00.225 common/mvep: not in enabled drivers build config
00:10:00.225 common/octeontx: not in enabled drivers build config
00:10:00.225 bus/auxiliary: not in enabled drivers build config
00:10:00.225 bus/dpaa: not in enabled drivers build config
00:10:00.225 bus/fslmc: not in enabled drivers build config
00:10:00.225 bus/ifpga: not in enabled drivers build config
00:10:00.225 bus/vmbus: not in enabled drivers build config
00:10:00.225 common/cnxk: not in enabled drivers build config
00:10:00.225 common/mlx5: not in enabled drivers build config
00:10:00.225 common/qat: not in enabled drivers build config
00:10:00.225 common/sfc_efx: not in enabled drivers build config
00:10:00.225 mempool/bucket: not in enabled drivers build config
00:10:00.225 mempool/cnxk: not in enabled drivers build config
00:10:00.225 mempool/dpaa: not in enabled drivers build config
00:10:00.225 mempool/dpaa2: not in enabled drivers build config
00:10:00.225 mempool/octeontx: not in enabled drivers build config
00:10:00.225 mempool/stack: not in enabled drivers build config
00:10:00.225 dma/cnxk: not in enabled drivers build config
00:10:00.225 dma/dpaa: not in enabled drivers build config
00:10:00.225 dma/dpaa2: not in enabled drivers build config
00:10:00.225 dma/hisilicon: not in enabled drivers build config
00:10:00.225 dma/idxd: not in enabled drivers build config
00:10:00.225 dma/ioat: not in enabled drivers build config
00:10:00.225 dma/skeleton: not in enabled drivers build config
00:10:00.225 net/af_packet: not in enabled drivers build config
00:10:00.225 net/af_xdp: not in enabled drivers build config
00:10:00.225 net/ark: not in enabled drivers build config
00:10:00.225 net/atlantic: not in enabled drivers build config
00:10:00.225 net/avp: not in enabled drivers build config
00:10:00.225 net/axgbe: not in enabled drivers build config
00:10:00.225 net/bnx2x: not in enabled drivers build config
00:10:00.225 net/bnxt: not in enabled drivers build config
00:10:00.225 net/bonding: not in enabled drivers build config
00:10:00.225 net/cnxk: not in enabled drivers build config
00:10:00.225 net/cxgbe: not in enabled drivers build config
00:10:00.225 net/dpaa: not in enabled drivers build config
00:10:00.225 net/dpaa2: not in enabled drivers build config
00:10:00.225 net/e1000: not in enabled drivers build config
00:10:00.225 net/ena: not in enabled drivers build config
00:10:00.225 net/enetc: not in enabled drivers build config
00:10:00.225 net/enetfec: not in enabled drivers build config
00:10:00.225 net/enic: not in enabled drivers build config
00:10:00.225 net/failsafe: not in enabled drivers build config
00:10:00.225 net/fm10k: not in enabled drivers build config
00:10:00.225 net/gve: not in enabled drivers build config
00:10:00.225 net/hinic: not in enabled drivers build config
00:10:00.225 net/hns3: not in enabled drivers build config
00:10:00.225 net/iavf: not in enabled drivers build config
00:10:00.225 net/ice: not in enabled drivers build config
00:10:00.225 net/idpf: not in enabled drivers build config
00:10:00.225 net/igc: not in enabled drivers build config
00:10:00.225 net/ionic: not in enabled drivers build config
00:10:00.225 net/ipn3ke: not in enabled drivers build config
00:10:00.225 net/ixgbe: not in enabled drivers build config
00:10:00.225 net/kni: not in enabled drivers build config
00:10:00.225 net/liquidio: not in enabled drivers build config
00:10:00.225 net/mana: not in enabled drivers build config
00:10:00.225 net/memif: not in enabled drivers build config
00:10:00.225 net/mlx4: not in enabled drivers build config
00:10:00.225 net/mlx5: not in enabled drivers build config
00:10:00.225 net/mvneta: not in enabled drivers build config
00:10:00.225 net/mvpp2: not in enabled drivers build config
00:10:00.225 net/netvsc: not in enabled drivers build config
00:10:00.225 net/nfb: not in enabled drivers build config
00:10:00.225 net/nfp: not in enabled drivers build config
00:10:00.225 net/ngbe: not in enabled drivers build config
00:10:00.225 net/null: not in enabled drivers build config
00:10:00.225 net/octeontx: not in enabled drivers build config
00:10:00.225 net/octeon_ep: not in enabled drivers build config
00:10:00.225 net/pcap: not in enabled drivers build config
00:10:00.225 net/pfe: not in enabled drivers build config
00:10:00.225 net/qede: not in enabled drivers build config
00:10:00.225 net/ring: not in enabled drivers build config
00:10:00.225 net/sfc: not in enabled drivers build config
00:10:00.225 net/softnic: not in enabled drivers build config
00:10:00.225 net/tap: not in enabled drivers build config
00:10:00.225 net/thunderx: not in enabled drivers build config
00:10:00.225 net/txgbe: not in enabled drivers build config
00:10:00.225 net/vdev_netvsc: not in enabled drivers build config
00:10:00.225 net/vhost: not in enabled drivers build config
00:10:00.225 net/virtio: not in enabled drivers build config
00:10:00.225 net/vmxnet3: not in enabled drivers build config
00:10:00.225 raw/cnxk_bphy: not in enabled drivers build config
00:10:00.225 raw/cnxk_gpio: not in enabled drivers build config
00:10:00.225 raw/dpaa2_cmdif: not in enabled drivers build config
00:10:00.225 raw/ifpga: not in enabled drivers build config
00:10:00.225 raw/ntb: not in enabled drivers build config
00:10:00.225 raw/skeleton: not in enabled drivers build config
00:10:00.225 crypto/armv8: not in enabled drivers build config
00:10:00.225 crypto/bcmfs: not in enabled drivers build config
00:10:00.225 crypto/caam_jr: not in enabled drivers build config
00:10:00.225 crypto/ccp: not in enabled drivers build config
00:10:00.225 crypto/cnxk: not in enabled drivers build config
00:10:00.225 crypto/dpaa_sec: not in enabled drivers build config
00:10:00.225 crypto/dpaa2_sec: not in enabled drivers build config
00:10:00.225 crypto/ipsec_mb: not in enabled drivers build config
00:10:00.225 crypto/mlx5: not in enabled drivers build config
00:10:00.225 crypto/mvsam: not in enabled drivers build config
00:10:00.225 crypto/nitrox: not in enabled drivers build config
00:10:00.225 crypto/null: not in enabled drivers build config
00:10:00.225 crypto/octeontx: not in enabled drivers build config
00:10:00.225 crypto/openssl: not in enabled drivers build config
00:10:00.225 crypto/scheduler: not in enabled drivers build config
00:10:00.225 crypto/uadk: not in enabled drivers build config
00:10:00.225 crypto/virtio: not in enabled drivers build config
00:10:00.225 compress/isal: not in enabled drivers build config
00:10:00.225 compress/mlx5: not in enabled drivers build config
00:10:00.225 compress/octeontx: not in enabled drivers build config
00:10:00.225 compress/zlib: not in enabled drivers build config
00:10:00.225 regex/mlx5: not in enabled drivers build config
00:10:00.225 regex/cn9k: not in enabled drivers build config
00:10:00.225 vdpa/ifc: not in enabled drivers build config
00:10:00.225 vdpa/mlx5: not in enabled drivers build config
00:10:00.225 vdpa/sfc: not in enabled drivers build config
00:10:00.225 event/cnxk: not in enabled drivers build config
00:10:00.225 event/dlb2: not in enabled drivers build config
00:10:00.225 event/dpaa: not in enabled drivers build config
00:10:00.225 event/dpaa2: not in enabled drivers build config
00:10:00.225 event/dsw: not in enabled drivers build config
00:10:00.225 event/opdl: not in enabled drivers build config
00:10:00.226 event/skeleton: not in enabled drivers build config
00:10:00.226 event/sw: not in enabled drivers build config
00:10:00.226 event/octeontx: not in enabled drivers build config
00:10:00.226 baseband/acc: not in enabled drivers build config
00:10:00.226 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:10:00.226 baseband/fpga_lte_fec: not in enabled drivers build config
00:10:00.226 baseband/la12xx: not in enabled drivers build config
00:10:00.226 baseband/null: not in enabled drivers build config
00:10:00.226 baseband/turbo_sw: not in enabled drivers build config
00:10:00.226 gpu/cuda: not in enabled drivers build config
00:10:00.226
00:10:00.226
00:10:00.226 Build targets in project: 316
00:10:00.226
00:10:00.226 DPDK 22.11.4
00:10:00.226
00:10:00.226 User defined options
00:10:00.226 libdir : lib
00:10:00.226 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:10:00.226 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:10:00.226 c_link_args :
00:10:00.226 enable_docs : false
00:10:00.226 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:10:00.226 enable_kmods : false
00:10:00.226 machine : native
00:10:00.226 tests : false
00:10:00.226
00:10:00.226 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:10:00.226 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
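Configuration succeeded with 316 targets; the compile step follows. For reference, the whole DPDK build above reduces to a configure-plus-compile pair. The flags below are copied from the log; the `meson setup` spelling is the non-deprecated form that the log's own WARNING recommends:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
    # Configure an out-of-tree build with docs, kmods and tests disabled and
    # only the bus/pci/vdev, ring mempool and i40e net drivers enabled.
    meson setup build-tmp --prefix=$PWD/build --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    ninja -C build-tmp -j48    # -j48 matches this host's parallelism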
00:10:00.226 08:35:54 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:10:00.226 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:10:00.226 [1/745] Generating lib/rte_kvargs_mingw with a custom command 00:10:00.226 [2/745] Generating lib/rte_telemetry_def with a custom command 00:10:00.226 [3/745] Generating lib/rte_telemetry_mingw with a custom command 00:10:00.226 [4/745] Generating lib/rte_kvargs_def with a custom command 00:10:00.226 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:10:00.226 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:10:00.226 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:10:00.226 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:10:00.226 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:10:00.226 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:10:00.226 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:10:00.226 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:10:00.226 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:10:00.226 [14/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:10:00.226 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:10:00.226 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:10:00.226 [17/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:10:00.226 [18/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:10:00.226 [19/745] Linking static target lib/librte_kvargs.a 00:10:00.226 [20/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:10:00.226 [21/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:10:00.226 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:10:00.486 [23/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:10:00.486 [24/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:10:00.486 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:10:00.486 [26/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:10:00.486 [27/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:10:00.486 [28/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:10:00.486 [29/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:10:00.486 [30/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:10:00.486 [31/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:10:00.486 [32/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:10:00.486 [33/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:10:00.486 [34/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:10:00.486 [35/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:10:00.486 [36/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:10:00.486 [37/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:10:00.486 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:10:00.486 [39/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:10:00.486 [40/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:10:00.486 [41/745] Generating lib/rte_eal_def with a custom command 00:10:00.486 [42/745] Generating lib/rte_eal_mingw with a custom command 00:10:00.486 [43/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:10:00.486 [44/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:10:00.486 [45/745] Generating lib/rte_ring_def with a custom command 00:10:00.486 [46/745] Generating lib/rte_ring_mingw with a custom command 00:10:00.486 [47/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:10:00.486 [48/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:10:00.486 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:10:00.486 [50/745] Generating lib/rte_rcu_def with a custom command 00:10:00.486 [51/745] Generating lib/rte_rcu_mingw with a custom command 00:10:00.486 [52/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:10:00.486 [53/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:10:00.486 [54/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:10:00.486 [55/745] Generating lib/rte_mempool_def with a custom command 00:10:00.486 [56/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:10:00.486 [57/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:10:00.486 [58/745] Generating lib/rte_mempool_mingw with a custom command 00:10:00.486 [59/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:10:00.486 [60/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:10:00.486 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:10:00.486 [62/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:10:00.486 [63/745] Generating lib/rte_mbuf_def with a custom command 00:10:00.486 [64/745] Generating lib/rte_mbuf_mingw with a custom command 00:10:00.486 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:10:00.486 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:10:00.486 [67/745] Generating lib/rte_net_mingw with a custom command 00:10:00.486 [68/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:10:00.486 [69/745] Generating lib/rte_net_def with a custom command 00:10:00.486 [70/745] Generating lib/rte_meter_def with a custom command 00:10:00.486 [71/745] Generating lib/rte_meter_mingw with a custom command 00:10:00.486 [72/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:10:00.486 [73/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:10:00.748 [74/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:10:00.748 [75/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:10:00.748 [76/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:10:00.748 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:10:00.748 [78/745] Generating lib/rte_ethdev_def with a custom command 00:10:00.748 [79/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:10:00.748 [80/745] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:10:00.748 [81/745] Linking static target lib/librte_ring.a 00:10:00.748 [82/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:10:00.748 [83/745] Generating lib/rte_ethdev_mingw with a custom command 00:10:00.748 [84/745] Linking target lib/librte_kvargs.so.23.0 00:10:00.748 [85/745] Generating lib/rte_pci_def with a custom command 00:10:00.748 [86/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:10:00.748 [87/745] Linking static target lib/librte_meter.a 00:10:00.748 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:10:00.748 [89/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:10:01.012 [90/745] Generating lib/rte_pci_mingw with a custom command 00:10:01.012 [91/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:10:01.012 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:10:01.012 [93/745] Linking static target lib/librte_pci.a 00:10:01.012 [94/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:10:01.012 [95/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:10:01.012 [96/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:10:01.012 [97/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:10:01.012 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:10:01.273 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:10:01.273 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:10:01.273 [101/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:10:01.273 [102/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:10:01.273 [103/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:10:01.273 [104/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:10:01.273 [105/745] Linking static target lib/librte_telemetry.a 00:10:01.273 [106/745] Generating lib/rte_cmdline_def with a custom command 00:10:01.273 [107/745] Generating lib/rte_cmdline_mingw with a custom command 00:10:01.273 [108/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:10:01.273 [109/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:10:01.273 [110/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:10:01.273 [111/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:10:01.273 [112/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:10:01.273 [113/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:10:01.273 [114/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:10:01.273 [115/745] Generating lib/rte_metrics_def with a custom command 00:10:01.273 [116/745] Generating lib/rte_metrics_mingw with a custom command 00:10:01.273 [117/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:10:01.273 [118/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:10:01.273 [119/745] Generating lib/rte_hash_def with a custom command 00:10:01.273 [120/745] Generating lib/rte_hash_mingw with a custom command 00:10:01.273 [121/745] Generating lib/rte_timer_def with a custom command 00:10:01.535 
[122/745] Generating lib/rte_timer_mingw with a custom command 00:10:01.535 [123/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:10:01.535 [124/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:10:01.535 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:10:01.535 [126/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:10:01.796 [127/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:10:01.796 [128/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:10:01.796 [129/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:10:01.796 [130/745] Generating lib/rte_acl_def with a custom command 00:10:01.796 [131/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:10:01.796 [132/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:10:01.797 [133/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:10:01.797 [134/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:10:01.797 [135/745] Generating lib/rte_acl_mingw with a custom command 00:10:01.797 [136/745] Generating lib/rte_bbdev_def with a custom command 00:10:01.797 [137/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:10:01.797 [138/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:10:01.797 [139/745] Generating lib/rte_bbdev_mingw with a custom command 00:10:01.797 [140/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:10:01.797 [141/745] Linking target lib/librte_telemetry.so.23.0 00:10:01.797 [142/745] Generating lib/rte_bitratestats_mingw with a custom command 00:10:01.797 [143/745] Generating lib/rte_bitratestats_def with a custom command 00:10:01.797 [144/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:10:01.797 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:10:02.058 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:10:02.058 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:10:02.058 [148/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:10:02.058 [149/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:10:02.058 [150/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:10:02.058 [151/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:10:02.058 [152/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:10:02.058 [153/745] Generating lib/rte_bpf_def with a custom command 00:10:02.058 [154/745] Generating lib/rte_bpf_mingw with a custom command 00:10:02.058 [155/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:10:02.058 [156/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:10:02.058 [157/745] Generating lib/rte_cfgfile_def with a custom command 00:10:02.058 [158/745] Generating lib/rte_cfgfile_mingw with a custom command 00:10:02.058 [159/745] Generating lib/rte_compressdev_def with a custom command 00:10:02.058 [160/745] Generating lib/rte_compressdev_mingw with a custom command 00:10:02.058 [161/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:10:02.058 [162/745] Generating lib/rte_cryptodev_def with a custom command 00:10:02.058 [163/745] Generating lib/rte_cryptodev_mingw with a custom command 
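The recurring "sym_chk" steps ("wrapped by meson to capture output") and the "Generating symbol file ..." entries above are, in effect, DPDK's build-time consistency checks that each shared library exports the symbols declared in its version map. A quick way to inspect what one of the freshly linked libraries actually exports, sketched here against the librte_kvargs.so.23.0 target linked above and the build-tmp directory from the ninja invocation:

    nm --dynamic --defined-only \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/lib/librte_kvargs.so.23.0 | head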
00:10:02.058 [164/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:10:02.321 [165/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:10:02.321 [166/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:10:02.321 [167/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:10:02.321 [168/745] Linking static target lib/librte_rcu.a 00:10:02.321 [169/745] Generating lib/rte_distributor_def with a custom command 00:10:02.321 [170/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:10:02.321 [171/745] Generating lib/rte_distributor_mingw with a custom command 00:10:02.321 [172/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:10:02.321 [173/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:10:02.321 [174/745] Linking static target lib/librte_timer.a 00:10:02.321 [175/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:10:02.321 [176/745] Linking static target lib/librte_net.a 00:10:02.321 [177/745] Linking static target lib/librte_cmdline.a 00:10:02.321 [178/745] Generating lib/rte_efd_def with a custom command 00:10:02.321 [179/745] Generating lib/rte_efd_mingw with a custom command 00:10:02.321 [180/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:10:02.321 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:10:02.585 [182/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:10:02.585 [183/745] Linking static target lib/librte_metrics.a 00:10:02.585 [184/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:10:02.585 [185/745] Linking static target lib/librte_mempool.a 00:10:02.585 [186/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:10:02.585 [187/745] Linking static target lib/librte_cfgfile.a 00:10:02.585 [188/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:10:02.844 [189/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:10:02.844 [190/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:10:02.844 [191/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:10:02.844 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:10:02.844 [193/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:10:02.844 [194/745] Generating lib/rte_eventdev_def with a custom command 00:10:02.844 [195/745] Linking static target lib/librte_eal.a 00:10:02.844 [196/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:10:02.844 [197/745] Generating lib/rte_eventdev_mingw with a custom command 00:10:03.104 [198/745] Generating lib/rte_gpudev_def with a custom command 00:10:03.104 [199/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:10:03.104 [200/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:10:03.104 [201/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:10:03.104 [202/745] Generating lib/rte_gpudev_mingw with a custom command 00:10:03.104 [203/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:10:03.104 [204/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:10:03.104 [205/745] Linking static target lib/librte_bitratestats.a 00:10:03.104 [206/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:10:03.104 [207/745] Generating lib/cfgfile.sym_chk with a custom command 
(wrapped by meson to capture output) 00:10:03.104 [208/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:10:03.104 [209/745] Generating lib/rte_gro_def with a custom command 00:10:03.104 [210/745] Generating lib/rte_gro_mingw with a custom command 00:10:03.104 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:10:03.370 [212/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:10:03.370 [213/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:10:03.370 [214/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:10:03.370 [215/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:10:03.370 [216/745] Generating lib/rte_gso_def with a custom command 00:10:03.370 [217/745] Generating lib/rte_gso_mingw with a custom command 00:10:03.370 [218/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:10:03.630 [219/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:10:03.630 [220/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:10:03.630 [221/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:10:03.630 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:10:03.630 [223/745] Generating lib/rte_ip_frag_def with a custom command 00:10:03.630 [224/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:10:03.630 [225/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:10:03.630 [226/745] Linking static target lib/librte_bbdev.a 00:10:03.630 [227/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:10:03.630 [228/745] Generating lib/rte_ip_frag_mingw with a custom command 00:10:03.630 [229/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:10:03.896 [230/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:10:03.896 [231/745] Generating lib/rte_jobstats_def with a custom command 00:10:03.896 [232/745] Generating lib/rte_jobstats_mingw with a custom command 00:10:03.896 [233/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:10:03.896 [234/745] Generating lib/rte_latencystats_mingw with a custom command 00:10:03.897 [235/745] Generating lib/rte_latencystats_def with a custom command 00:10:03.897 [236/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:10:03.897 [237/745] Linking static target lib/librte_compressdev.a 00:10:03.897 [238/745] Generating lib/rte_lpm_mingw with a custom command 00:10:03.897 [239/745] Generating lib/rte_lpm_def with a custom command 00:10:03.897 [240/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:10:03.897 [241/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:10:03.897 [242/745] Linking static target lib/librte_jobstats.a 00:10:04.159 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:10:04.159 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:10:04.159 [245/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:10:04.159 [246/745] Linking static target lib/librte_distributor.a 00:10:04.159 [247/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:10:04.421 [248/745] 
Generating lib/rte_member_def with a custom command 00:10:04.421 [249/745] Generating lib/rte_member_mingw with a custom command 00:10:04.421 [250/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:10:04.421 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:10:04.421 [252/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:10:04.421 [253/745] Generating lib/rte_pcapng_def with a custom command 00:10:04.421 [254/745] Generating lib/rte_pcapng_mingw with a custom command 00:10:04.421 [255/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:10:04.421 [256/745] Linking static target lib/librte_bpf.a 00:10:04.682 [257/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:10:04.682 [258/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:10:04.682 [259/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:10:04.682 [260/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:10:04.682 [261/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:10:04.682 [262/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:04.682 [263/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:10:04.682 [264/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:10:04.682 [265/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:10:04.682 [266/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:10:04.682 [267/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:10:04.682 [268/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:10:04.683 [269/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:10:04.683 [270/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:10:04.683 [271/745] Generating lib/rte_power_def with a custom command 00:10:04.683 [272/745] Generating lib/rte_power_mingw with a custom command 00:10:04.683 [273/745] Linking static target lib/librte_gpudev.a 00:10:04.683 [274/745] Linking static target lib/librte_gro.a 00:10:04.683 [275/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:10:04.683 [276/745] Generating lib/rte_rawdev_def with a custom command 00:10:04.683 [277/745] Generating lib/rte_rawdev_mingw with a custom command 00:10:04.683 [278/745] Generating lib/rte_regexdev_def with a custom command 00:10:04.948 [279/745] Generating lib/rte_regexdev_mingw with a custom command 00:10:04.948 [280/745] Generating lib/rte_dmadev_def with a custom command 00:10:04.948 [281/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:10:04.948 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:10:04.948 [283/745] Generating lib/rte_rib_mingw with a custom command 00:10:04.948 [284/745] Generating lib/rte_rib_def with a custom command 00:10:04.948 [285/745] Generating lib/rte_reorder_def with a custom command 00:10:04.948 [286/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:10:04.948 [287/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:10:04.948 [288/745] Generating lib/rte_reorder_mingw with a custom command 00:10:05.211 [289/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:10:05.211 [290/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:10:05.211 
[291/745] Generating lib/rte_sched_def with a custom command 00:10:05.211 [292/745] Generating lib/rte_sched_mingw with a custom command 00:10:05.211 [293/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:10:05.211 [294/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:10:05.211 [295/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:10:05.211 [296/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:10:05.211 [297/745] Generating lib/rte_security_def with a custom command 00:10:05.211 [298/745] Generating lib/rte_security_mingw with a custom command 00:10:05.212 [299/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:10:05.212 [300/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:05.212 [301/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:10:05.212 [302/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:10:05.212 [303/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:10:05.212 [304/745] Generating lib/rte_stack_def with a custom command 00:10:05.212 [305/745] Generating lib/rte_stack_mingw with a custom command 00:10:05.475 [306/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:10:05.475 [307/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:10:05.475 [308/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:10:05.475 [309/745] Linking static target lib/librte_latencystats.a 00:10:05.475 [310/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:10:05.475 [311/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:10:05.475 [312/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:10:05.475 [313/745] Linking static target lib/librte_rawdev.a 00:10:05.475 [314/745] Linking static target lib/librte_stack.a 00:10:05.475 [315/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:10:05.475 [316/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:10:05.475 [317/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:10:05.475 [318/745] Generating lib/rte_vhost_def with a custom command 00:10:05.475 [319/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:10:05.475 [320/745] Generating lib/rte_vhost_mingw with a custom command 00:10:05.475 [321/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:10:05.475 [322/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:10:05.475 [323/745] Linking static target lib/librte_dmadev.a 00:10:05.740 [324/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:10:05.740 [325/745] Linking static target lib/librte_ip_frag.a 00:10:05.740 [326/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:10:05.740 [327/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:10:05.740 [328/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:10:05.740 [329/745] Generating lib/rte_ipsec_def with a custom command 00:10:05.740 [330/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:10:06.014 [331/745] Generating lib/rte_ipsec_mingw with a custom command 00:10:06.014 [332/745] Compiling C object 
lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:10:06.014 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:10:06.273 [334/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:10:06.273 [335/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:10:06.273 [336/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:10:06.273 [337/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:06.273 [338/745] Linking static target lib/librte_gso.a 00:10:06.273 [339/745] Generating lib/rte_fib_def with a custom command 00:10:06.273 [340/745] Generating lib/rte_fib_mingw with a custom command 00:10:06.273 [341/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:06.273 [342/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:10:06.273 [343/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:10:06.273 [344/745] Linking static target lib/librte_regexdev.a 00:10:06.273 [345/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:10:06.536 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:06.536 [347/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:10:06.536 [348/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:10:06.536 [349/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:10:06.536 [350/745] Linking static target lib/librte_efd.a 00:10:06.795 [351/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:10:06.795 [352/745] Linking static target lib/librte_pcapng.a 00:10:06.795 [353/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:10:06.795 [354/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:10:06.795 [355/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:10:06.795 [356/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:10:06.795 [357/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:10:06.795 [358/745] Linking static target lib/librte_lpm.a 00:10:07.056 [359/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:10:07.056 [360/745] Linking static target lib/librte_reorder.a 00:10:07.056 [361/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:10:07.056 [362/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:10:07.056 [363/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:10:07.056 [364/745] Generating lib/rte_port_def with a custom command 00:10:07.056 [365/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:10:07.056 [366/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:10:07.056 [367/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:10:07.056 [368/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:10:07.056 [369/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:10:07.317 [370/745] Generating lib/rte_port_mingw with a custom command 00:10:07.317 [371/745] Linking static target lib/acl/libavx2_tmp.a 00:10:07.317 [372/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:10:07.317 [373/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:10:07.317 [374/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 
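Alongside the static archives being produced here ("Linking static target ..."), the build also links each library as a versioned shared object (the lib*.so.23.0 targets seen earlier). Once the install step has populated the prefix from the options summary, an application would normally pick up compile and link flags through pkg-config rather than naming each library by hand. A sketch, assuming the prefix logged above and a hypothetical source file my_app.c:

    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
    cc -O2 my_app.c -o my_app $(pkg-config --cflags --libs libdpdk)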
00:10:07.317 [375/745] Generating lib/rte_pdump_def with a custom command 00:10:07.317 [376/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:10:07.317 [377/745] Generating lib/rte_pdump_mingw with a custom command 00:10:07.317 [378/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:10:07.317 [379/745] Linking static target lib/librte_security.a 00:10:07.317 [380/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:10:07.317 [381/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:10:07.317 [382/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:10:07.582 [383/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:10:07.582 [384/745] Linking static target lib/librte_power.a 00:10:07.582 [385/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:10:07.582 [386/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:10:07.582 [387/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:10:07.582 [388/745] Linking static target lib/librte_hash.a 00:10:07.582 [389/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:07.582 [390/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:10:07.582 [391/745] Linking static target lib/librte_rib.a 00:10:07.845 [392/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:10:07.845 [393/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:10:07.845 [394/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:10:07.845 [395/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:10:07.845 [396/745] Linking static target lib/acl/libavx512_tmp.a 00:10:07.845 [397/745] Generating lib/rte_table_def with a custom command 00:10:07.845 [398/745] Linking static target lib/librte_acl.a 00:10:07.845 [399/745] Generating lib/rte_table_mingw with a custom command 00:10:08.108 [400/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:10:08.108 [401/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:10:08.108 [402/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:10:08.108 [403/745] Linking static target lib/librte_ethdev.a 00:10:08.373 [404/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:10:08.373 [405/745] Linking static target lib/librte_mbuf.a 00:10:08.373 [406/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:10:08.373 [407/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:10:08.373 [408/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:10:08.373 [409/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:10:08.630 [410/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:10:08.630 [411/745] Generating lib/rte_pipeline_def with a custom command 00:10:08.630 [412/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:10:08.630 [413/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:10:08.630 [414/745] Generating lib/rte_pipeline_mingw with a custom command 00:10:08.630 [415/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:10:08.630 [416/745] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:10:08.630 [417/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:10:08.630 [418/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:10:08.630 [419/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:10:08.630 [420/745] Linking static target lib/librte_fib.a 00:10:08.630 [421/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:10:08.630 [422/745] Generating lib/rte_graph_def with a custom command 00:10:08.630 [423/745] Generating lib/rte_graph_mingw with a custom command 00:10:08.630 [424/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:10:08.893 [425/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:10:08.893 [426/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:10:08.893 [427/745] Linking static target lib/librte_member.a 00:10:08.893 [428/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:10:08.893 [429/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:10:08.893 [430/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:10:09.153 [431/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:10:09.153 [432/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:10:09.153 [433/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:10:09.154 [434/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:10:09.154 [435/745] Linking static target lib/librte_eventdev.a 00:10:09.154 [436/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:10:09.154 [437/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:10:09.154 [438/745] Generating lib/rte_node_mingw with a custom command 00:10:09.154 [439/745] Generating lib/rte_node_def with a custom command 00:10:09.154 [440/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:10:09.154 [441/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:10:09.154 [442/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:10:09.154 [443/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:10:09.154 [444/745] Linking static target lib/librte_sched.a 00:10:09.420 [445/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:10:09.420 [446/745] Generating drivers/rte_bus_pci_def with a custom command 00:10:09.420 [447/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:10:09.420 [448/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:10:09.420 [449/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:10:09.420 [450/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:10:09.420 [451/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:10:09.420 [452/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:10:09.420 [453/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:10:09.420 [454/745] Generating drivers/rte_bus_vdev_def with a custom command 00:10:09.420 [455/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:10:09.420 [456/745] Generating drivers/rte_mempool_ring_def with a custom command 00:10:09.682 [457/745] 
Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:10:09.682 [458/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:10:09.682 [459/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:10:09.682 [460/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:10:09.682 [461/745] Linking static target lib/librte_cryptodev.a 00:10:09.682 [462/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:10:09.682 [463/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:10:09.682 [464/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:10:09.682 [465/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:10:09.682 [466/745] Linking static target lib/librte_pdump.a 00:10:09.943 [467/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:10:09.943 [468/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:10:09.943 [469/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:10:09.943 [470/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:10:09.943 [471/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:10:09.943 [472/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:10:09.943 [473/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:10:09.943 [474/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:10:09.943 [475/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:10:09.943 [476/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:10:09.943 [477/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:10:10.225 [478/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:10:10.225 [479/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:10:10.225 [480/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:10:10.225 [481/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:10:10.225 [482/745] Generating drivers/rte_net_i40e_def with a custom command 00:10:10.225 [483/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:10:10.225 [484/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:10:10.225 [485/745] Linking static target drivers/librte_bus_vdev.a 00:10:10.225 [486/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:10:10.225 [487/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:10:10.537 [488/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:10:10.537 [489/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:10:10.537 [490/745] Linking static target lib/librte_table.a 00:10:10.537 [491/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:10:10.537 [492/745] Linking static target lib/librte_ipsec.a 00:10:10.537 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:10:10.537 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:10:10.537 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:10.800 [496/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:10:10.800 [497/745] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:10:10.800 [498/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:10:11.061 [499/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:10:11.061 [500/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:10:11.061 [501/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:10:11.061 [502/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:10:11.061 [503/745] Linking static target lib/librte_graph.a 00:10:11.061 [504/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:10:11.061 [505/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:10:11.061 [506/745] Linking static target drivers/librte_bus_pci.a 00:10:11.061 [507/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:10:11.061 [508/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:10:11.061 [509/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:10:11.061 [510/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:10:11.061 [511/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:10:11.325 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:10:11.325 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:10:11.589 [514/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:10:11.589 [515/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:10:11.589 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:10:11.850 [517/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:10:11.850 [518/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:10:12.117 [519/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:10:12.117 [520/745] Linking static target lib/librte_port.a 00:10:12.117 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:10:12.117 [522/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:10:12.117 [523/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:12.117 [524/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:10:12.117 [525/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:10:12.386 [526/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:10:12.386 [527/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:10:12.386 [528/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:10:12.386 [529/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:10:12.386 [530/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:10:12.386 [531/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:10:12.646 [532/745] Linking static target drivers/librte_mempool_ring.a 00:10:12.646 [533/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:10:12.646 [534/745] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:10:12.647 [535/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:10:12.647 [536/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:10:12.647 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:10:12.917 [538/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:10:12.917 [539/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:10:12.917 [540/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:10:13.179 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:13.448 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:10:13.448 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:10:13.448 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:10:13.448 [545/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:10:13.448 [546/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:10:13.712 [547/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:10:13.712 [548/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:10:13.712 [549/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:10:13.712 [550/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:10:13.712 [551/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:10:13.972 [552/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:10:13.972 [553/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:10:14.236 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:10:14.236 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:10:14.236 [556/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:10:14.236 [557/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:10:14.498 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:10:14.498 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:10:14.758 [560/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:10:14.758 [561/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:10:14.758 [562/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:10:14.758 [563/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:10:15.024 [564/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:10:15.024 [565/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:10:15.024 [566/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:10:15.024 [567/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:10:15.024 [568/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:10:15.024 [569/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:10:15.285 [570/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:10:15.285 
[571/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:10:15.285 [572/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:10:15.285 [573/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:10:15.544 [574/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:10:15.544 [575/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:10:15.544 [576/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:10:15.809 [577/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:10:15.809 [578/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:10:15.809 [579/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:10:15.809 [580/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:10:15.809 [581/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:10:15.809 [582/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:10:15.809 [583/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:10:16.068 [584/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:10:16.330 [585/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:10:16.330 [586/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:10:16.330 [587/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:10:16.593 [588/745] Linking target lib/librte_eal.so.23.0 00:10:16.593 [589/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:16.854 [590/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:10:16.854 [591/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:10:16.854 [592/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:10:16.854 [593/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:10:16.855 [594/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:10:16.855 [595/745] Linking target lib/librte_ring.so.23.0 00:10:16.855 [596/745] Linking target lib/librte_meter.so.23.0 00:10:16.855 [597/745] Linking target lib/librte_pci.so.23.0 00:10:16.855 [598/745] Linking target lib/librte_timer.so.23.0 00:10:16.855 [599/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:10:16.855 [600/745] Linking target lib/librte_cfgfile.so.23.0 00:10:16.855 [601/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:10:16.855 [602/745] Linking target lib/librte_acl.so.23.0 00:10:16.855 [603/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:10:17.118 [604/745] Linking target lib/librte_jobstats.so.23.0 00:10:17.118 [605/745] Linking target lib/librte_dmadev.so.23.0 00:10:17.118 [606/745] Linking target lib/librte_rawdev.so.23.0 00:10:17.118 [607/745] Linking target lib/librte_stack.so.23.0 00:10:17.118 [608/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:10:17.118 [609/745] Linking target lib/librte_graph.so.23.0 00:10:17.118 [610/745] Linking target drivers/librte_bus_vdev.so.23.0 00:10:17.118 [611/745] Generating symbol file 
lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:10:17.118 [612/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:10:17.118 [613/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:10:17.118 [614/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:10:17.118 [615/745] Linking target lib/librte_rcu.so.23.0 00:10:17.118 [616/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:10:17.118 [617/745] Linking target lib/librte_mempool.so.23.0 00:10:17.118 [618/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:10:17.118 [619/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:10:17.379 [620/745] Linking target drivers/librte_bus_pci.so.23.0 00:10:17.379 [621/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:10:17.379 [622/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:10:17.379 [623/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:10:17.379 [624/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:10:17.379 [625/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:10:17.379 [626/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:10:17.379 [627/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:10:17.379 [628/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:10:17.379 [629/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:10:17.379 [630/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:10:17.379 [631/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:10:17.379 [632/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:10:17.379 [633/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:10:17.638 [634/745] Linking target drivers/librte_mempool_ring.so.23.0 00:10:17.638 [635/745] Linking target lib/librte_rib.so.23.0 00:10:17.638 [636/745] Linking target lib/librte_mbuf.so.23.0 00:10:17.638 [637/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:10:17.638 [638/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:10:17.638 [639/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:10:17.638 [640/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:10:17.638 [641/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:10:17.638 [642/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:10:17.638 [643/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:10:17.638 [644/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:10:17.638 [645/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:10:17.896 [646/745] Linking target lib/librte_net.so.23.0 00:10:17.897 [647/745] Linking target lib/librte_cryptodev.so.23.0 00:10:17.897 [648/745] Linking target lib/librte_sched.so.23.0 00:10:17.897 [649/745] Linking target lib/librte_gpudev.so.23.0 00:10:17.897 [650/745] Linking target lib/librte_distributor.so.23.0 00:10:17.897 [651/745] Linking target lib/librte_bbdev.so.23.0 00:10:17.897 [652/745] Linking target lib/librte_reorder.so.23.0 00:10:17.897 
[653/745] Linking target lib/librte_regexdev.so.23.0 00:10:17.897 [654/745] Linking target lib/librte_compressdev.so.23.0 00:10:17.897 [655/745] Linking target lib/librte_fib.so.23.0 00:10:17.897 [656/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:10:17.897 [657/745] Linking target lib/librte_hash.so.23.0 00:10:17.897 [658/745] Linking target lib/librte_cmdline.so.23.0 00:10:17.897 [659/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:10:17.897 [660/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:10:17.897 [661/745] Linking target lib/librte_ethdev.so.23.0 00:10:17.897 [662/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:10:17.897 [663/745] Linking target lib/librte_security.so.23.0 00:10:18.155 [664/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:10:18.155 [665/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:10:18.155 [666/745] Linking target lib/librte_lpm.so.23.0 00:10:18.155 [667/745] Linking target lib/librte_efd.so.23.0 00:10:18.155 [668/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:10:18.155 [669/745] Linking target lib/librte_member.so.23.0 00:10:18.155 [670/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:10:18.155 [671/745] Linking target lib/librte_gso.so.23.0 00:10:18.155 [672/745] Linking target lib/librte_pcapng.so.23.0 00:10:18.155 [673/745] Linking target lib/librte_ip_frag.so.23.0 00:10:18.155 [674/745] Linking target lib/librte_gro.so.23.0 00:10:18.155 [675/745] Linking target lib/librte_metrics.so.23.0 00:10:18.155 [676/745] Linking target lib/librte_bpf.so.23.0 00:10:18.155 [677/745] Linking target lib/librte_power.so.23.0 00:10:18.155 [678/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:10:18.155 [679/745] Linking target lib/librte_eventdev.so.23.0 00:10:18.155 [680/745] Linking target lib/librte_ipsec.so.23.0 00:10:18.413 [681/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:10:18.413 [682/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:10:18.413 [683/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:10:18.413 [684/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:10:18.413 [685/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:10:18.413 [686/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:10:18.413 [687/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:10:18.413 [688/745] Linking target lib/librte_pdump.so.23.0 00:10:18.413 [689/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:10:18.413 [690/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:10:18.413 [691/745] Linking target lib/librte_latencystats.so.23.0 00:10:18.413 [692/745] Linking target lib/librte_bitratestats.so.23.0 00:10:18.413 [693/745] Linking target lib/librte_port.so.23.0 00:10:18.672 [694/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:10:18.672 [695/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:10:18.672 [696/745] Linking target 
lib/librte_table.so.23.0 00:10:18.672 [697/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:10:18.930 [698/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:10:19.187 [699/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:10:19.187 [700/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:10:19.187 [701/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:10:19.446 [702/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:10:19.704 [703/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:10:19.704 [704/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:10:19.704 [705/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:10:19.704 [706/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:10:19.704 [707/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:10:19.704 [708/745] Linking static target drivers/librte_net_i40e.a 00:10:19.962 [709/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:10:20.220 [710/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:10:20.220 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:10:20.220 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:10:21.595 [713/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:10:21.595 [714/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:10:21.595 [715/745] Linking static target lib/librte_node.a 00:10:21.852 [716/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:10:21.853 [717/745] Linking target lib/librte_node.so.23.0 00:10:22.110 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:10:22.676 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:10:30.820 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:11:02.882 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:11:02.882 [722/745] Linking static target lib/librte_vhost.a 00:11:02.882 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:11:02.882 [724/745] Linking target lib/librte_vhost.so.23.0 00:11:10.987 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:11:10.987 [726/745] Linking static target lib/librte_pipeline.a 00:11:11.246 [727/745] Linking target app/dpdk-test-pipeline 00:11:11.246 [728/745] Linking target app/dpdk-pdump 00:11:11.246 [729/745] Linking target app/dpdk-test-sad 00:11:11.504 [730/745] Linking target app/dpdk-test-cmdline 00:11:11.504 [731/745] Linking target app/dpdk-dumpcap 00:11:11.504 [732/745] Linking target app/dpdk-test-security-perf 00:11:11.504 [733/745] Linking target app/dpdk-test-regex 00:11:11.504 [734/745] Linking target app/dpdk-test-crypto-perf 00:11:11.504 [735/745] Linking target app/dpdk-test-compress-perf 00:11:11.504 [736/745] Linking target app/dpdk-test-fib 00:11:11.504 [737/745] Linking target app/dpdk-proc-info 00:11:11.504 [738/745] Linking target app/dpdk-test-gpudev 00:11:11.504 [739/745] Linking target app/dpdk-test-flow-perf 00:11:11.504 [740/745] Linking target app/dpdk-test-acl 00:11:11.504 [741/745] Linking target 
app/dpdk-test-bbdev 00:11:11.504 [742/745] Linking target app/dpdk-test-eventdev 00:11:11.504 [743/745] Linking target app/dpdk-testpmd 00:11:13.405 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:11:13.405 [745/745] Linking target lib/librte_pipeline.so.23.0 00:11:13.405 08:37:07 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:11:13.405 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:11:13.405 [0/1] Installing files. 00:11:13.668 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:11:13.668 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.668 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.668 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.668 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.668 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.668 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.668 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.668 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.668 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.668 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.668 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.668 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.668 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.668 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.668 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.668 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.668 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.668 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.668 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.668 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:11:13.669 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 
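The install destinations above pin down the install prefix: everything lands under /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build. Only the ninja install invocation appears verbatim in this log; a minimal sketch of the full sequence, with the meson setup flags as an assumption:
# Sketch; the setup flags are assumed, only the install line is shown in the log.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
meson setup build-tmp --prefix=$PWD/build
ninja -C build-tmp -j48           # the compile/link steps logged above, [1/745]..[745/745]
ninja -C build-tmp -j48 install   # copies examples/ into build/share/dpdk/examples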
00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:11:13.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:11:13.670 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:11:13.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:11:13.671 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.671 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.672 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:11:13.672 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:11:13.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 
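The example applications staged above (the ipsec-secgw test scripts, server_node_efd, ip_pipeline, bpf, vmdq, link_status_interrupt, ip_reassembly, and the rest) are installed as buildable source under build/share/dpdk/examples, each directory keeping its Makefile so it can be rebuilt standalone against the installed tree. For orientation only: nearly every main.c in these trees follows the same EAL bring-up skeleton. The sketch below is illustrative, not a copy of any installed file, and assumes the usual pkg-config-based build against this tree:

    /* Minimal skeleton shared by the DPDK examples installed above.
     * Illustrative sketch only; not taken from any of the installed files. */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>

    int main(int argc, char **argv)
    {
        int ret = rte_eal_init(argc, argv);   /* consumes the EAL arguments */
        if (ret < 0) {
            fprintf(stderr, "rte_eal_init failed\n");
            return 1;
        }
        printf("EAL up, %u lcores available\n", rte_lcore_count());
        /* example-specific port/queue setup and the main loop go here */
        rte_eal_cleanup();                    /* release EAL resources */
        return 0;
    }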
00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.673 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.674 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:11:13.674 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:11:13.674 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:11:13.674 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:11:13.674 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:11:13.674 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:11:13.674 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:11:13.674 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:11:13.674 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:11:13.674 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:11:13.674 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:11:13.674 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:11:13.674 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:11:13.674 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:11:13.674 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:11:13.674 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:11:13.674 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:11:13.674 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:11:13.674 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:11:13.674 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:11:13.674 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 
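From this point the log switches from example sources to the libraries themselves: each DPDK component is installed twice, as a static archive (librte_*.a) and as a versioned shared object (librte_*.so.23.0, the ABI version visible in the filenames), into build/lib. A quick way to verify that a consumer sees the new installation is a trivial probe built through pkg-config; the build line below is illustrative and assumes the generated libdpdk.pc from this tree is on PKG_CONFIG_PATH:

    /* probe.c - link test against the freshly installed DPDK.
     * Illustrative build line (paths assumed, not from this log):
     *   cc probe.c $(pkg-config --cflags --libs libdpdk)
     */
    #include <stdio.h>
    #include <rte_version.h>

    int main(void)
    {
        /* rte_version() needs no EAL init; it just returns the version string. */
        printf("%s\n", rte_version());
        return 0;
    }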
Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_bitratestats.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_pcapng.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.674 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.675 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.675 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.675 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.675 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.675 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.245 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.245 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.245 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.245 Installing lib/librte_pipeline.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.245 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.245 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.245 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.245 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.245 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.245 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:11:14.245 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.245 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:11:14.245 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.245 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:11:14.245 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.245 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:11:14.245 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:11:14.245 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:11:14.245 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:11:14.245 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:11:14.245 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:11:14.245 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:11:14.245 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:11:14.245 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:11:14.245 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:11:14.245 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:11:14.245 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:11:14.245 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:11:14.245 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:11:14.245 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:11:14.245 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:11:14.245 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:11:14.245 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:11:14.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.245 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:11:14.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 
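The header installation above shows DPDK's two-level layout: portable fallbacks go to build/include/generic (generic/rte_atomic.h, generic/rte_spinlock.h, and so on), while the x86 variants from lib/eal/x86/include are installed flat into build/include and include the generic versions as needed. Application code only ever includes the flat header. A minimal, illustrative use of one of them (assuming compiler flags come from pkg-config as usual):

    /* spin.c - trivial use of the installed <rte_spinlock.h>; on this build the
     * flat header is the x86 variant layered over include/generic/rte_spinlock.h. */
    #include <rte_spinlock.h>

    int main(void)
    {
        rte_spinlock_t lock = RTE_SPINLOCK_INITIALIZER;

        rte_spinlock_lock(&lock);
        /* critical section: spinlocks work without rte_eal_init() */
        rte_spinlock_unlock(&lock);
        return 0;
    }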
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:11:14.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:11:14.249 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:11:14.249 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:11:14.249 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:11:14.249 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:11:14.249 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:11:14.249 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:11:14.249 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:11:14.249 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:11:14.249 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:11:14.249 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:11:14.249 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:11:14.249 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:11:14.249 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:11:14.249 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:11:14.249 Installing symlink pointing to librte_net.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:11:14.249 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:11:14.249 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:11:14.249 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:11:14.249 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:11:14.249 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:11:14.249 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:11:14.249 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:11:14.249 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:11:14.249 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:11:14.249 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:11:14.249 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:11:14.249 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:11:14.249 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:11:14.249 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:11:14.249 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:11:14.249 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:11:14.249 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:11:14.249 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:11:14.249 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:11:14.249 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:11:14.249 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:11:14.249 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:11:14.249 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:11:14.249 Installing symlink pointing to librte_cfgfile.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:11:14.250 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:11:14.250 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:11:14.250 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:11:14.250 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:11:14.250 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:11:14.250 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:11:14.250 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:11:14.250 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:11:14.250 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:11:14.250 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:11:14.250 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:11:14.250 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:11:14.250 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:11:14.250 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:11:14.250 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:11:14.250 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:11:14.250 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:11:14.250 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:11:14.250 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:11:14.250 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:11:14.250 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:11:14.250 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:11:14.250 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:11:14.250 Installing symlink pointing to 
librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:11:14.250 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:11:14.250 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:11:14.250 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:11:14.250 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:11:14.250 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:11:14.250 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:11:14.250 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:11:14.250 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:11:14.250 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:11:14.250 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:11:14.250 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:11:14.250 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:11:14.250 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:11:14.250 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:11:14.250 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:11:14.250 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:11:14.250 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:11:14.250 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:11:14.250 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:11:14.250 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:11:14.250 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:11:14.250 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:11:14.250 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:11:14.250 Installing symlink pointing to librte_vhost.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:11:14.250 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:11:14.250 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:11:14.250 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:11:14.250 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:11:14.250 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:11:14.250 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:11:14.250 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:11:14.250 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:11:14.250 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:11:14.250 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:11:14.250 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:11:14.250 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:11:14.250 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:11:14.250 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:11:14.250 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:11:14.250 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:11:14.250 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:11:14.250 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:11:14.250 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:11:14.250 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:11:14.250 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:11:14.250 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:11:14.250 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:11:14.250 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:11:14.250 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:11:14.250 './librte_bus_vdev.so.23' -> 
'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:11:14.250 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:11:14.250 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:11:14.250 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:11:14.250 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:11:14.250 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:11:14.250 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:11:14.250 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:11:14.250 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:11:14.250 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:11:14.250 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:11:14.250 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:11:14.250 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:11:14.250 08:37:08 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:11:14.250 08:37:08 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:11:14.250 08:37:08 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:11:14.251 08:37:08 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:14.251 00:11:14.251 real 1m19.560s 00:11:14.251 user 14m35.029s 00:11:14.251 sys 1m50.350s 00:11:14.251 08:37:08 build_native_dpdk -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:11:14.251 08:37:08 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:11:14.251 ************************************ 00:11:14.251 END TEST build_native_dpdk 00:11:14.251 ************************************ 00:11:14.251 08:37:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:11:14.251 08:37:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:11:14.251 08:37:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:11:14.251 08:37:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:11:14.251 08:37:08 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:11:14.251 08:37:08 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:11:14.251 08:37:08 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:11:14.251 08:37:08 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:11:14.251 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
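For orientation at this point in the log: the configure invocation above is the pattern for building SPDK against an externally built DPDK instead of its bundled submodule. A minimal sketch of the same flow, with /path/to standing in for the workspace paths here and the flags trimmed to the essential ones (illustrative, not the exact CI wrapper invocation):

    # DPDK was built out-of-tree with meson/ninja and installed under its build/ prefix
    meson setup build-tmp --prefix=/path/to/dpdk/build
    ninja -C build-tmp install

    # SPDK's configure then consumes that prefix via --with-dpdk
    ./configure --with-dpdk=/path/to/dpdk/build --with-shared

As the next lines show, configure resolves the DPDK libraries through the libdpdk.pc file installed into build/lib/pkgconfig; the same lookup can be reproduced by hand:

    PKG_CONFIG_PATH=/path/to/dpdk/build/lib/pkgconfig pkg-config --cflags --libs libdpdk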
00:11:14.510 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.510 DPDK includes: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.510 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:14.768 Using 'verbs' RDMA provider 00:11:25.304 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:11:33.411 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:11:33.669 Creating mk/config.mk...done. 00:11:33.669 Creating mk/cc.flags.mk...done. 00:11:33.669 Type 'make' to build. 00:11:33.669 08:37:28 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:11:33.669 08:37:28 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:11:33.669 08:37:28 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:11:33.669 08:37:28 -- common/autotest_common.sh@10 -- $ set +x 00:11:33.669 ************************************ 00:11:33.669 START TEST make 00:11:33.669 ************************************ 00:11:33.669 08:37:28 make -- common/autotest_common.sh@1122 -- $ make -j48 00:11:33.927 make[1]: Nothing to be done for 'all'. 00:11:35.321 The Meson build system 00:11:35.321 Version: 1.3.1 00:11:35.321 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:11:35.321 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:11:35.321 Build type: native build 00:11:35.321 Project name: libvfio-user 00:11:35.321 Project version: 0.0.1 00:11:35.321 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:11:35.321 C linker for the host machine: gcc ld.bfd 2.39-16 00:11:35.321 Host machine cpu family: x86_64 00:11:35.321 Host machine cpu: x86_64 00:11:35.321 Run-time dependency threads found: YES 00:11:35.321 Library dl found: YES 00:11:35.321 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:11:35.321 Run-time dependency json-c found: YES 0.17 00:11:35.321 Run-time dependency cmocka found: YES 1.1.7 00:11:35.321 Program pytest-3 found: NO 00:11:35.321 Program flake8 found: NO 00:11:35.321 Program misspell-fixer found: NO 00:11:35.321 Program restructuredtext-lint found: NO 00:11:35.321 Program valgrind found: YES (/usr/bin/valgrind) 00:11:35.321 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:11:35.321 Compiler for C supports arguments -Wmissing-declarations: YES 00:11:35.321 Compiler for C supports arguments -Wwrite-strings: YES 00:11:35.321 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:11:35.321 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:11:35.321 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:11:35.321 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:11:35.321 Build targets in project: 8 00:11:35.321 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:11:35.321 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:11:35.321 00:11:35.321 libvfio-user 0.0.1 00:11:35.321 00:11:35.321 User defined options 00:11:35.321 buildtype : debug 00:11:35.321 default_library: shared 00:11:35.321 libdir : /usr/local/lib 00:11:35.321 00:11:35.321 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:11:36.274 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:11:36.274 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:11:36.274 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:11:36.274 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:11:36.274 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:11:36.537 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:11:36.537 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:11:36.537 [7/37] Compiling C object samples/lspci.p/lspci.c.o 00:11:36.537 [8/37] Compiling C object samples/null.p/null.c.o 00:11:36.537 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:11:36.537 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:11:36.537 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:11:36.537 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:11:36.537 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:11:36.537 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:11:36.537 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:11:36.537 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:11:36.537 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:11:36.537 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:11:36.537 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:11:36.537 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:11:36.537 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:11:36.537 [22/37] Compiling C object samples/server.p/server.c.o 00:11:36.537 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:11:36.537 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:11:36.537 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:11:36.537 [26/37] Compiling C object samples/client.p/client.c.o 00:11:36.797 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:11:36.797 [28/37] Linking target samples/client 00:11:36.797 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:11:36.797 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:11:36.797 [31/37] Linking target test/unit_tests 00:11:37.058 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:11:37.058 [33/37] Linking target samples/null 00:11:37.058 [34/37] Linking target samples/lspci 00:11:37.058 [35/37] Linking target samples/gpio-pci-idio-16 00:11:37.058 [36/37] Linking target samples/shadow_ioeventfd_server 00:11:37.058 [37/37] Linking target samples/server 00:11:37.058 INFO: autodetecting backend as ninja 00:11:37.058 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
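The libvfio-user configuration and ninja run above amount to a standard Meson out-of-tree debug build, driven here by SPDK's make. A sketch of the equivalent manual steps, using the source and build directories from this log and the options echoed in the 'User defined options' block (illustrative, not the exact wrapper invocation SPDK uses):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    meson setup --buildtype=debug --default-library=shared --libdir=/usr/local/lib \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug

The DESTDIR line that follows stages the install into SPDK's build tree rather than /usr/local, which is why the resulting libraries land under spdk/build/libvfio-user.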
00:11:37.058 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:11:38.004 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:11:38.004 ninja: no work to do. 00:11:50.199 CC lib/ut_mock/mock.o 00:11:50.199 CC lib/log/log.o 00:11:50.199 CC lib/log/log_flags.o 00:11:50.199 CC lib/log/log_deprecated.o 00:11:50.199 CC lib/ut/ut.o 00:11:50.199 LIB libspdk_ut_mock.a 00:11:50.199 SO libspdk_ut_mock.so.6.0 00:11:50.199 LIB libspdk_log.a 00:11:50.199 LIB libspdk_ut.a 00:11:50.199 SO libspdk_ut.so.2.0 00:11:50.199 SO libspdk_log.so.7.0 00:11:50.199 SYMLINK libspdk_ut_mock.so 00:11:50.199 SYMLINK libspdk_ut.so 00:11:50.199 SYMLINK libspdk_log.so 00:11:50.199 CC lib/dma/dma.o 00:11:50.199 CC lib/util/base64.o 00:11:50.199 CC lib/util/bit_array.o 00:11:50.199 CC lib/util/cpuset.o 00:11:50.199 CXX lib/trace_parser/trace.o 00:11:50.199 CC lib/ioat/ioat.o 00:11:50.199 CC lib/util/crc16.o 00:11:50.199 CC lib/util/crc32.o 00:11:50.199 CC lib/util/crc32c.o 00:11:50.199 CC lib/util/crc32_ieee.o 00:11:50.199 CC lib/util/crc64.o 00:11:50.199 CC lib/util/dif.o 00:11:50.199 CC lib/util/fd.o 00:11:50.199 CC lib/util/file.o 00:11:50.199 CC lib/util/hexlify.o 00:11:50.199 CC lib/util/iov.o 00:11:50.200 CC lib/util/math.o 00:11:50.200 CC lib/util/pipe.o 00:11:50.200 CC lib/util/strerror_tls.o 00:11:50.200 CC lib/util/string.o 00:11:50.200 CC lib/util/uuid.o 00:11:50.200 CC lib/util/fd_group.o 00:11:50.200 CC lib/util/xor.o 00:11:50.200 CC lib/util/zipf.o 00:11:50.200 CC lib/vfio_user/host/vfio_user_pci.o 00:11:50.200 CC lib/vfio_user/host/vfio_user.o 00:11:50.200 LIB libspdk_dma.a 00:11:50.200 SO libspdk_dma.so.4.0 00:11:50.200 SYMLINK libspdk_dma.so 00:11:50.200 LIB libspdk_ioat.a 00:11:50.200 SO libspdk_ioat.so.7.0 00:11:50.200 LIB libspdk_vfio_user.a 00:11:50.200 SYMLINK libspdk_ioat.so 00:11:50.200 SO libspdk_vfio_user.so.5.0 00:11:50.200 SYMLINK libspdk_vfio_user.so 00:11:50.200 LIB libspdk_util.a 00:11:50.200 SO libspdk_util.so.9.0 00:11:50.458 SYMLINK libspdk_util.so 00:11:50.458 CC lib/json/json_parse.o 00:11:50.458 CC lib/env_dpdk/env.o 00:11:50.458 CC lib/rdma/common.o 00:11:50.458 CC lib/idxd/idxd.o 00:11:50.458 CC lib/conf/conf.o 00:11:50.458 CC lib/json/json_util.o 00:11:50.458 CC lib/env_dpdk/memory.o 00:11:50.458 CC lib/idxd/idxd_user.o 00:11:50.458 CC lib/vmd/vmd.o 00:11:50.458 CC lib/rdma/rdma_verbs.o 00:11:50.458 CC lib/json/json_write.o 00:11:50.458 CC lib/env_dpdk/pci.o 00:11:50.458 CC lib/vmd/led.o 00:11:50.458 CC lib/env_dpdk/init.o 00:11:50.458 CC lib/env_dpdk/threads.o 00:11:50.458 CC lib/env_dpdk/pci_ioat.o 00:11:50.458 CC lib/env_dpdk/pci_virtio.o 00:11:50.458 CC lib/env_dpdk/pci_vmd.o 00:11:50.458 CC lib/env_dpdk/pci_idxd.o 00:11:50.458 CC lib/env_dpdk/pci_event.o 00:11:50.458 CC lib/env_dpdk/sigbus_handler.o 00:11:50.458 CC lib/env_dpdk/pci_dpdk.o 00:11:50.458 CC lib/env_dpdk/pci_dpdk_2207.o 00:11:50.458 CC lib/env_dpdk/pci_dpdk_2211.o 00:11:50.717 LIB libspdk_trace_parser.a 00:11:50.717 SO libspdk_trace_parser.so.5.0 00:11:50.717 SYMLINK libspdk_trace_parser.so 00:11:50.717 LIB libspdk_conf.a 00:11:50.975 SO libspdk_conf.so.6.0 00:11:50.975 LIB libspdk_json.a 00:11:50.975 LIB libspdk_rdma.a 00:11:50.975 SYMLINK libspdk_conf.so 00:11:50.975 SO libspdk_rdma.so.6.0 00:11:50.975 SO libspdk_json.so.6.0 00:11:50.975 SYMLINK libspdk_rdma.so 00:11:50.975 SYMLINK libspdk_json.so 00:11:51.233 CC 
lib/jsonrpc/jsonrpc_server.o 00:11:51.233 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:11:51.233 CC lib/jsonrpc/jsonrpc_client.o 00:11:51.233 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:11:51.233 LIB libspdk_idxd.a 00:11:51.233 SO libspdk_idxd.so.12.0 00:11:51.233 LIB libspdk_vmd.a 00:11:51.233 SO libspdk_vmd.so.6.0 00:11:51.233 SYMLINK libspdk_idxd.so 00:11:51.233 SYMLINK libspdk_vmd.so 00:11:51.492 LIB libspdk_jsonrpc.a 00:11:51.492 SO libspdk_jsonrpc.so.6.0 00:11:51.492 SYMLINK libspdk_jsonrpc.so 00:11:51.787 CC lib/rpc/rpc.o 00:11:52.070 LIB libspdk_rpc.a 00:11:52.070 SO libspdk_rpc.so.6.0 00:11:52.070 SYMLINK libspdk_rpc.so 00:11:52.070 CC lib/trace/trace.o 00:11:52.070 CC lib/notify/notify.o 00:11:52.070 CC lib/trace/trace_flags.o 00:11:52.070 CC lib/notify/notify_rpc.o 00:11:52.070 CC lib/trace/trace_rpc.o 00:11:52.070 CC lib/keyring/keyring.o 00:11:52.070 CC lib/keyring/keyring_rpc.o 00:11:52.328 LIB libspdk_notify.a 00:11:52.328 SO libspdk_notify.so.6.0 00:11:52.328 LIB libspdk_keyring.a 00:11:52.328 SYMLINK libspdk_notify.so 00:11:52.328 LIB libspdk_trace.a 00:11:52.328 SO libspdk_keyring.so.1.0 00:11:52.328 SO libspdk_trace.so.10.0 00:11:52.587 SYMLINK libspdk_keyring.so 00:11:52.587 SYMLINK libspdk_trace.so 00:11:52.587 LIB libspdk_env_dpdk.a 00:11:52.587 SO libspdk_env_dpdk.so.14.0 00:11:52.587 CC lib/sock/sock.o 00:11:52.587 CC lib/sock/sock_rpc.o 00:11:52.587 CC lib/thread/thread.o 00:11:52.587 CC lib/thread/iobuf.o 00:11:52.845 SYMLINK libspdk_env_dpdk.so 00:11:53.103 LIB libspdk_sock.a 00:11:53.103 SO libspdk_sock.so.9.0 00:11:53.103 SYMLINK libspdk_sock.so 00:11:53.362 CC lib/nvme/nvme_ctrlr_cmd.o 00:11:53.362 CC lib/nvme/nvme_ctrlr.o 00:11:53.362 CC lib/nvme/nvme_fabric.o 00:11:53.362 CC lib/nvme/nvme_ns_cmd.o 00:11:53.362 CC lib/nvme/nvme_ns.o 00:11:53.362 CC lib/nvme/nvme_pcie_common.o 00:11:53.362 CC lib/nvme/nvme_pcie.o 00:11:53.362 CC lib/nvme/nvme_qpair.o 00:11:53.362 CC lib/nvme/nvme.o 00:11:53.362 CC lib/nvme/nvme_quirks.o 00:11:53.362 CC lib/nvme/nvme_transport.o 00:11:53.362 CC lib/nvme/nvme_discovery.o 00:11:53.362 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:11:53.362 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:11:53.362 CC lib/nvme/nvme_tcp.o 00:11:53.362 CC lib/nvme/nvme_opal.o 00:11:53.362 CC lib/nvme/nvme_io_msg.o 00:11:53.362 CC lib/nvme/nvme_poll_group.o 00:11:53.362 CC lib/nvme/nvme_zns.o 00:11:53.362 CC lib/nvme/nvme_stubs.o 00:11:53.362 CC lib/nvme/nvme_auth.o 00:11:53.362 CC lib/nvme/nvme_cuse.o 00:11:53.362 CC lib/nvme/nvme_vfio_user.o 00:11:53.362 CC lib/nvme/nvme_rdma.o 00:11:54.297 LIB libspdk_thread.a 00:11:54.298 SO libspdk_thread.so.10.0 00:11:54.298 SYMLINK libspdk_thread.so 00:11:54.556 CC lib/blob/blobstore.o 00:11:54.556 CC lib/vfu_tgt/tgt_endpoint.o 00:11:54.556 CC lib/init/json_config.o 00:11:54.556 CC lib/blob/request.o 00:11:54.556 CC lib/vfu_tgt/tgt_rpc.o 00:11:54.556 CC lib/virtio/virtio.o 00:11:54.556 CC lib/init/subsystem.o 00:11:54.556 CC lib/blob/zeroes.o 00:11:54.556 CC lib/virtio/virtio_vhost_user.o 00:11:54.556 CC lib/init/subsystem_rpc.o 00:11:54.556 CC lib/blob/blob_bs_dev.o 00:11:54.556 CC lib/accel/accel.o 00:11:54.556 CC lib/init/rpc.o 00:11:54.556 CC lib/virtio/virtio_vfio_user.o 00:11:54.556 CC lib/accel/accel_rpc.o 00:11:54.556 CC lib/virtio/virtio_pci.o 00:11:54.556 CC lib/accel/accel_sw.o 00:11:54.814 LIB libspdk_init.a 00:11:54.814 SO libspdk_init.so.5.0 00:11:54.814 LIB libspdk_virtio.a 00:11:54.814 LIB libspdk_vfu_tgt.a 00:11:54.814 SYMLINK libspdk_init.so 00:11:54.814 SO libspdk_vfu_tgt.so.3.0 00:11:54.814 SO libspdk_virtio.so.7.0 
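A reading aid for the make output in this part of the log, hedged since SPDK's actual Makefile rules are more involved than this: each component is compiled (CC), archived into a static library (LIB), linked into a versioned shared object (SO, produced because configure ran with --with-shared), and given an unversioned symlink for link-time use (SYMLINK). Schematically, for the libspdk_ut_mock component seen earlier (commands are illustrative stand-ins, not SPDK's rules):

    cc -c lib/ut_mock/mock.c -o mock.o                 # CC      compile one object
    ar crs libspdk_ut_mock.a mock.o                    # LIB     archive the static library
    cc -shared -o libspdk_ut_mock.so.6.0 mock.o        # SO      link the versioned shared object
    ln -sf libspdk_ut_mock.so.6.0 libspdk_ut_mock.so   # SYMLINK unversioned development symlink

The SO and SYMLINK records for different libspdk_* components interleave because make runs with -j48, so the per-component rules complete in parallel.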
00:11:54.814 SYMLINK libspdk_vfu_tgt.so 00:11:55.072 SYMLINK libspdk_virtio.so 00:11:55.072 CC lib/event/app.o 00:11:55.072 CC lib/event/reactor.o 00:11:55.072 CC lib/event/log_rpc.o 00:11:55.072 CC lib/event/app_rpc.o 00:11:55.072 CC lib/event/scheduler_static.o 00:11:55.331 LIB libspdk_event.a 00:11:55.589 SO libspdk_event.so.13.0 00:11:55.589 LIB libspdk_accel.a 00:11:55.589 SYMLINK libspdk_event.so 00:11:55.589 SO libspdk_accel.so.15.0 00:11:55.589 SYMLINK libspdk_accel.so 00:11:55.589 LIB libspdk_nvme.a 00:11:55.847 SO libspdk_nvme.so.13.0 00:11:55.847 CC lib/bdev/bdev.o 00:11:55.847 CC lib/bdev/bdev_rpc.o 00:11:55.847 CC lib/bdev/bdev_zone.o 00:11:55.847 CC lib/bdev/part.o 00:11:55.847 CC lib/bdev/scsi_nvme.o 00:11:56.104 SYMLINK libspdk_nvme.so 00:11:57.502 LIB libspdk_blob.a 00:11:57.502 SO libspdk_blob.so.11.0 00:11:57.502 SYMLINK libspdk_blob.so 00:11:57.760 CC lib/lvol/lvol.o 00:11:57.760 CC lib/blobfs/blobfs.o 00:11:57.760 CC lib/blobfs/tree.o 00:11:58.327 LIB libspdk_bdev.a 00:11:58.327 SO libspdk_bdev.so.15.0 00:11:58.592 LIB libspdk_blobfs.a 00:11:58.592 SYMLINK libspdk_bdev.so 00:11:58.592 SO libspdk_blobfs.so.10.0 00:11:58.592 SYMLINK libspdk_blobfs.so 00:11:58.592 LIB libspdk_lvol.a 00:11:58.592 CC lib/nbd/nbd.o 00:11:58.592 CC lib/ublk/ublk.o 00:11:58.592 CC lib/nbd/nbd_rpc.o 00:11:58.592 CC lib/ftl/ftl_core.o 00:11:58.592 CC lib/ublk/ublk_rpc.o 00:11:58.592 CC lib/ftl/ftl_init.o 00:11:58.592 CC lib/ftl/ftl_layout.o 00:11:58.592 CC lib/ftl/ftl_debug.o 00:11:58.592 CC lib/nvmf/ctrlr.o 00:11:58.592 CC lib/ftl/ftl_io.o 00:11:58.592 CC lib/scsi/dev.o 00:11:58.592 CC lib/nvmf/ctrlr_discovery.o 00:11:58.592 CC lib/scsi/lun.o 00:11:58.592 CC lib/ftl/ftl_sb.o 00:11:58.592 CC lib/nvmf/ctrlr_bdev.o 00:11:58.592 CC lib/scsi/port.o 00:11:58.592 CC lib/ftl/ftl_l2p.o 00:11:58.592 CC lib/nvmf/subsystem.o 00:11:58.592 CC lib/scsi/scsi.o 00:11:58.592 CC lib/ftl/ftl_l2p_flat.o 00:11:58.592 CC lib/nvmf/nvmf.o 00:11:58.592 CC lib/scsi/scsi_bdev.o 00:11:58.592 CC lib/nvmf/nvmf_rpc.o 00:11:58.592 CC lib/ftl/ftl_nv_cache.o 00:11:58.592 CC lib/ftl/ftl_band.o 00:11:58.592 CC lib/scsi/scsi_pr.o 00:11:58.592 CC lib/scsi/scsi_rpc.o 00:11:58.592 CC lib/ftl/ftl_writer.o 00:11:58.592 CC lib/nvmf/transport.o 00:11:58.592 CC lib/nvmf/tcp.o 00:11:58.592 CC lib/scsi/task.o 00:11:58.592 CC lib/ftl/ftl_band_ops.o 00:11:58.592 CC lib/ftl/ftl_rq.o 00:11:58.592 CC lib/ftl/ftl_reloc.o 00:11:58.592 CC lib/nvmf/stubs.o 00:11:58.592 CC lib/ftl/ftl_l2p_cache.o 00:11:58.592 CC lib/nvmf/mdns_server.o 00:11:58.592 CC lib/nvmf/vfio_user.o 00:11:58.592 CC lib/ftl/ftl_p2l.o 00:11:58.592 CC lib/nvmf/rdma.o 00:11:58.592 CC lib/ftl/mngt/ftl_mngt.o 00:11:58.592 CC lib/nvmf/auth.o 00:11:58.592 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:11:58.592 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:11:58.592 CC lib/ftl/mngt/ftl_mngt_startup.o 00:11:58.592 SO libspdk_lvol.so.10.0 00:11:58.592 CC lib/ftl/mngt/ftl_mngt_md.o 00:11:58.592 CC lib/ftl/mngt/ftl_mngt_misc.o 00:11:58.853 SYMLINK libspdk_lvol.so 00:11:58.853 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:11:59.115 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:11:59.115 CC lib/ftl/mngt/ftl_mngt_band.o 00:11:59.115 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:11:59.115 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:11:59.115 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:11:59.115 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:11:59.115 CC lib/ftl/utils/ftl_conf.o 00:11:59.115 CC lib/ftl/utils/ftl_md.o 00:11:59.115 CC lib/ftl/utils/ftl_mempool.o 00:11:59.115 CC lib/ftl/utils/ftl_bitmap.o 00:11:59.115 CC lib/ftl/utils/ftl_property.o 00:11:59.115 CC 
lib/ftl/utils/ftl_layout_tracker_bdev.o 00:11:59.115 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:11:59.115 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:11:59.115 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:11:59.115 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:11:59.115 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:11:59.115 CC lib/ftl/upgrade/ftl_sb_v3.o 00:11:59.376 CC lib/ftl/upgrade/ftl_sb_v5.o 00:11:59.376 CC lib/ftl/nvc/ftl_nvc_dev.o 00:11:59.376 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:11:59.376 CC lib/ftl/base/ftl_base_dev.o 00:11:59.376 CC lib/ftl/base/ftl_base_bdev.o 00:11:59.376 CC lib/ftl/ftl_trace.o 00:11:59.376 LIB libspdk_nbd.a 00:11:59.634 SO libspdk_nbd.so.7.0 00:11:59.634 LIB libspdk_scsi.a 00:11:59.634 SYMLINK libspdk_nbd.so 00:11:59.634 SO libspdk_scsi.so.9.0 00:11:59.634 LIB libspdk_ublk.a 00:11:59.634 SYMLINK libspdk_scsi.so 00:11:59.634 SO libspdk_ublk.so.3.0 00:11:59.893 SYMLINK libspdk_ublk.so 00:11:59.893 CC lib/vhost/vhost.o 00:11:59.893 CC lib/iscsi/conn.o 00:11:59.893 CC lib/iscsi/init_grp.o 00:11:59.893 CC lib/vhost/vhost_rpc.o 00:11:59.893 CC lib/iscsi/iscsi.o 00:11:59.893 CC lib/vhost/vhost_scsi.o 00:11:59.893 CC lib/vhost/vhost_blk.o 00:11:59.893 CC lib/iscsi/md5.o 00:11:59.893 CC lib/vhost/rte_vhost_user.o 00:11:59.893 CC lib/iscsi/param.o 00:11:59.893 CC lib/iscsi/portal_grp.o 00:11:59.893 CC lib/iscsi/tgt_node.o 00:11:59.893 CC lib/iscsi/iscsi_subsystem.o 00:11:59.893 CC lib/iscsi/iscsi_rpc.o 00:11:59.893 CC lib/iscsi/task.o 00:11:59.893 LIB libspdk_ftl.a 00:12:00.151 SO libspdk_ftl.so.9.0 00:12:00.409 SYMLINK libspdk_ftl.so 00:12:00.975 LIB libspdk_vhost.a 00:12:01.233 SO libspdk_vhost.so.8.0 00:12:01.233 SYMLINK libspdk_vhost.so 00:12:01.233 LIB libspdk_nvmf.a 00:12:01.233 LIB libspdk_iscsi.a 00:12:01.233 SO libspdk_nvmf.so.18.0 00:12:01.233 SO libspdk_iscsi.so.8.0 00:12:01.492 SYMLINK libspdk_iscsi.so 00:12:01.492 SYMLINK libspdk_nvmf.so 00:12:01.755 CC module/env_dpdk/env_dpdk_rpc.o 00:12:01.755 CC module/vfu_device/vfu_virtio.o 00:12:01.755 CC module/vfu_device/vfu_virtio_blk.o 00:12:01.755 CC module/vfu_device/vfu_virtio_scsi.o 00:12:01.755 CC module/vfu_device/vfu_virtio_rpc.o 00:12:01.755 CC module/blob/bdev/blob_bdev.o 00:12:01.755 CC module/accel/ioat/accel_ioat.o 00:12:01.755 CC module/accel/ioat/accel_ioat_rpc.o 00:12:01.755 CC module/scheduler/gscheduler/gscheduler.o 00:12:01.755 CC module/accel/iaa/accel_iaa.o 00:12:01.755 CC module/accel/dsa/accel_dsa.o 00:12:01.755 CC module/accel/error/accel_error.o 00:12:01.755 CC module/accel/iaa/accel_iaa_rpc.o 00:12:01.755 CC module/accel/dsa/accel_dsa_rpc.o 00:12:01.755 CC module/accel/error/accel_error_rpc.o 00:12:01.755 CC module/scheduler/dynamic/scheduler_dynamic.o 00:12:01.755 CC module/sock/posix/posix.o 00:12:01.755 CC module/keyring/file/keyring.o 00:12:01.755 CC module/keyring/file/keyring_rpc.o 00:12:01.755 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:12:02.013 LIB libspdk_env_dpdk_rpc.a 00:12:02.013 SO libspdk_env_dpdk_rpc.so.6.0 00:12:02.013 SYMLINK libspdk_env_dpdk_rpc.so 00:12:02.013 LIB libspdk_scheduler_gscheduler.a 00:12:02.013 LIB libspdk_keyring_file.a 00:12:02.013 LIB libspdk_scheduler_dpdk_governor.a 00:12:02.013 SO libspdk_scheduler_gscheduler.so.4.0 00:12:02.013 SO libspdk_scheduler_dpdk_governor.so.4.0 00:12:02.013 SO libspdk_keyring_file.so.1.0 00:12:02.013 LIB libspdk_accel_error.a 00:12:02.013 LIB libspdk_accel_ioat.a 00:12:02.013 LIB libspdk_scheduler_dynamic.a 00:12:02.013 LIB libspdk_accel_iaa.a 00:12:02.013 SO libspdk_accel_error.so.2.0 00:12:02.013 SO libspdk_accel_ioat.so.6.0 
00:12:02.013 SYMLINK libspdk_scheduler_gscheduler.so 00:12:02.013 SO libspdk_scheduler_dynamic.so.4.0 00:12:02.013 SYMLINK libspdk_scheduler_dpdk_governor.so 00:12:02.013 SYMLINK libspdk_keyring_file.so 00:12:02.271 SO libspdk_accel_iaa.so.3.0 00:12:02.271 LIB libspdk_accel_dsa.a 00:12:02.271 SYMLINK libspdk_accel_ioat.so 00:12:02.271 SYMLINK libspdk_scheduler_dynamic.so 00:12:02.271 SYMLINK libspdk_accel_error.so 00:12:02.271 LIB libspdk_blob_bdev.a 00:12:02.271 SO libspdk_accel_dsa.so.5.0 00:12:02.271 SYMLINK libspdk_accel_iaa.so 00:12:02.271 SO libspdk_blob_bdev.so.11.0 00:12:02.271 SYMLINK libspdk_accel_dsa.so 00:12:02.271 SYMLINK libspdk_blob_bdev.so 00:12:02.535 LIB libspdk_vfu_device.a 00:12:02.535 SO libspdk_vfu_device.so.3.0 00:12:02.535 CC module/bdev/split/vbdev_split.o 00:12:02.535 CC module/bdev/error/vbdev_error.o 00:12:02.535 CC module/bdev/split/vbdev_split_rpc.o 00:12:02.535 CC module/bdev/delay/vbdev_delay.o 00:12:02.535 CC module/blobfs/bdev/blobfs_bdev.o 00:12:02.535 CC module/bdev/error/vbdev_error_rpc.o 00:12:02.535 CC module/bdev/delay/vbdev_delay_rpc.o 00:12:02.535 CC module/bdev/lvol/vbdev_lvol.o 00:12:02.536 CC module/bdev/null/bdev_null.o 00:12:02.536 CC module/bdev/virtio/bdev_virtio_scsi.o 00:12:02.536 CC module/bdev/ftl/bdev_ftl.o 00:12:02.536 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:12:02.536 CC module/bdev/gpt/gpt.o 00:12:02.536 CC module/bdev/zone_block/vbdev_zone_block.o 00:12:02.536 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:12:02.536 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:12:02.536 CC module/bdev/virtio/bdev_virtio_blk.o 00:12:02.536 CC module/bdev/nvme/bdev_nvme.o 00:12:02.536 CC module/bdev/ftl/bdev_ftl_rpc.o 00:12:02.536 CC module/bdev/null/bdev_null_rpc.o 00:12:02.536 CC module/bdev/malloc/bdev_malloc.o 00:12:02.536 CC module/bdev/gpt/vbdev_gpt.o 00:12:02.536 CC module/bdev/iscsi/bdev_iscsi.o 00:12:02.536 CC module/bdev/virtio/bdev_virtio_rpc.o 00:12:02.536 CC module/bdev/malloc/bdev_malloc_rpc.o 00:12:02.536 CC module/bdev/nvme/bdev_nvme_rpc.o 00:12:02.536 CC module/bdev/passthru/vbdev_passthru.o 00:12:02.536 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:12:02.536 CC module/bdev/nvme/nvme_rpc.o 00:12:02.536 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:12:02.536 CC module/bdev/nvme/bdev_mdns_client.o 00:12:02.536 CC module/bdev/nvme/vbdev_opal.o 00:12:02.536 CC module/bdev/nvme/vbdev_opal_rpc.o 00:12:02.536 CC module/bdev/aio/bdev_aio.o 00:12:02.536 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:12:02.536 CC module/bdev/aio/bdev_aio_rpc.o 00:12:02.536 CC module/bdev/raid/bdev_raid.o 00:12:02.536 CC module/bdev/raid/bdev_raid_rpc.o 00:12:02.536 CC module/bdev/raid/bdev_raid_sb.o 00:12:02.536 CC module/bdev/raid/raid0.o 00:12:02.536 CC module/bdev/raid/raid1.o 00:12:02.536 CC module/bdev/raid/concat.o 00:12:02.536 SYMLINK libspdk_vfu_device.so 00:12:02.794 LIB libspdk_sock_posix.a 00:12:02.794 SO libspdk_sock_posix.so.6.0 00:12:02.794 LIB libspdk_blobfs_bdev.a 00:12:02.794 SYMLINK libspdk_sock_posix.so 00:12:03.053 SO libspdk_blobfs_bdev.so.6.0 00:12:03.053 LIB libspdk_bdev_zone_block.a 00:12:03.053 LIB libspdk_bdev_gpt.a 00:12:03.053 SO libspdk_bdev_zone_block.so.6.0 00:12:03.053 LIB libspdk_bdev_split.a 00:12:03.053 SYMLINK libspdk_blobfs_bdev.so 00:12:03.053 SO libspdk_bdev_gpt.so.6.0 00:12:03.053 LIB libspdk_bdev_aio.a 00:12:03.053 SO libspdk_bdev_split.so.6.0 00:12:03.053 LIB libspdk_bdev_delay.a 00:12:03.053 LIB libspdk_bdev_error.a 00:12:03.053 LIB libspdk_bdev_passthru.a 00:12:03.053 LIB libspdk_bdev_null.a 00:12:03.053 SO 
libspdk_bdev_aio.so.6.0 00:12:03.053 SYMLINK libspdk_bdev_zone_block.so 00:12:03.053 LIB libspdk_bdev_ftl.a 00:12:03.053 SO libspdk_bdev_error.so.6.0 00:12:03.053 SO libspdk_bdev_delay.so.6.0 00:12:03.053 SYMLINK libspdk_bdev_gpt.so 00:12:03.053 SO libspdk_bdev_passthru.so.6.0 00:12:03.053 SO libspdk_bdev_null.so.6.0 00:12:03.053 SYMLINK libspdk_bdev_split.so 00:12:03.053 SO libspdk_bdev_ftl.so.6.0 00:12:03.053 SYMLINK libspdk_bdev_aio.so 00:12:03.053 SYMLINK libspdk_bdev_error.so 00:12:03.053 SYMLINK libspdk_bdev_delay.so 00:12:03.053 SYMLINK libspdk_bdev_passthru.so 00:12:03.053 SYMLINK libspdk_bdev_null.so 00:12:03.053 LIB libspdk_bdev_malloc.a 00:12:03.053 SYMLINK libspdk_bdev_ftl.so 00:12:03.053 LIB libspdk_bdev_lvol.a 00:12:03.053 SO libspdk_bdev_malloc.so.6.0 00:12:03.053 LIB libspdk_bdev_iscsi.a 00:12:03.310 SO libspdk_bdev_lvol.so.6.0 00:12:03.310 SO libspdk_bdev_iscsi.so.6.0 00:12:03.310 SYMLINK libspdk_bdev_malloc.so 00:12:03.310 SYMLINK libspdk_bdev_lvol.so 00:12:03.310 SYMLINK libspdk_bdev_iscsi.so 00:12:03.311 LIB libspdk_bdev_virtio.a 00:12:03.311 SO libspdk_bdev_virtio.so.6.0 00:12:03.311 SYMLINK libspdk_bdev_virtio.so 00:12:03.568 LIB libspdk_bdev_raid.a 00:12:03.827 SO libspdk_bdev_raid.so.6.0 00:12:03.827 SYMLINK libspdk_bdev_raid.so 00:12:04.760 LIB libspdk_bdev_nvme.a 00:12:05.018 SO libspdk_bdev_nvme.so.7.0 00:12:05.018 SYMLINK libspdk_bdev_nvme.so 00:12:05.276 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:12:05.276 CC module/event/subsystems/iobuf/iobuf.o 00:12:05.276 CC module/event/subsystems/vmd/vmd.o 00:12:05.276 CC module/event/subsystems/sock/sock.o 00:12:05.276 CC module/event/subsystems/scheduler/scheduler.o 00:12:05.276 CC module/event/subsystems/vmd/vmd_rpc.o 00:12:05.276 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:12:05.276 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:12:05.276 CC module/event/subsystems/keyring/keyring.o 00:12:05.535 LIB libspdk_event_sock.a 00:12:05.535 LIB libspdk_event_keyring.a 00:12:05.535 LIB libspdk_event_vhost_blk.a 00:12:05.535 LIB libspdk_event_vfu_tgt.a 00:12:05.535 LIB libspdk_event_scheduler.a 00:12:05.535 LIB libspdk_event_vmd.a 00:12:05.535 SO libspdk_event_sock.so.5.0 00:12:05.535 SO libspdk_event_keyring.so.1.0 00:12:05.535 LIB libspdk_event_iobuf.a 00:12:05.535 SO libspdk_event_vhost_blk.so.3.0 00:12:05.535 SO libspdk_event_vfu_tgt.so.3.0 00:12:05.535 SO libspdk_event_scheduler.so.4.0 00:12:05.535 SO libspdk_event_vmd.so.6.0 00:12:05.535 SO libspdk_event_iobuf.so.3.0 00:12:05.535 SYMLINK libspdk_event_sock.so 00:12:05.535 SYMLINK libspdk_event_keyring.so 00:12:05.535 SYMLINK libspdk_event_vhost_blk.so 00:12:05.535 SYMLINK libspdk_event_vfu_tgt.so 00:12:05.535 SYMLINK libspdk_event_scheduler.so 00:12:05.535 SYMLINK libspdk_event_vmd.so 00:12:05.535 SYMLINK libspdk_event_iobuf.so 00:12:05.837 CC module/event/subsystems/accel/accel.o 00:12:06.095 LIB libspdk_event_accel.a 00:12:06.095 SO libspdk_event_accel.so.6.0 00:12:06.095 SYMLINK libspdk_event_accel.so 00:12:06.095 CC module/event/subsystems/bdev/bdev.o 00:12:06.353 LIB libspdk_event_bdev.a 00:12:06.353 SO libspdk_event_bdev.so.6.0 00:12:06.353 SYMLINK libspdk_event_bdev.so 00:12:06.610 CC module/event/subsystems/nbd/nbd.o 00:12:06.610 CC module/event/subsystems/ublk/ublk.o 00:12:06.610 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:12:06.610 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:12:06.610 CC module/event/subsystems/scsi/scsi.o 00:12:06.869 LIB libspdk_event_nbd.a 00:12:06.869 LIB libspdk_event_ublk.a 00:12:06.869 LIB libspdk_event_scsi.a 
00:12:06.869 SO libspdk_event_nbd.so.6.0 00:12:06.869 SO libspdk_event_ublk.so.3.0 00:12:06.869 SO libspdk_event_scsi.so.6.0 00:12:06.869 SYMLINK libspdk_event_nbd.so 00:12:06.869 SYMLINK libspdk_event_ublk.so 00:12:06.869 SYMLINK libspdk_event_scsi.so 00:12:06.869 LIB libspdk_event_nvmf.a 00:12:06.869 SO libspdk_event_nvmf.so.6.0 00:12:06.869 SYMLINK libspdk_event_nvmf.so 00:12:07.126 CC module/event/subsystems/iscsi/iscsi.o 00:12:07.126 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:12:07.126 LIB libspdk_event_vhost_scsi.a 00:12:07.126 LIB libspdk_event_iscsi.a 00:12:07.126 SO libspdk_event_vhost_scsi.so.3.0 00:12:07.126 SO libspdk_event_iscsi.so.6.0 00:12:07.126 SYMLINK libspdk_event_vhost_scsi.so 00:12:07.126 SYMLINK libspdk_event_iscsi.so 00:12:07.384 SO libspdk.so.6.0 00:12:07.384 SYMLINK libspdk.so 00:12:07.647 CXX app/trace/trace.o 00:12:07.647 CC app/trace_record/trace_record.o 00:12:07.647 CC app/spdk_lspci/spdk_lspci.o 00:12:07.647 CC app/spdk_top/spdk_top.o 00:12:07.647 CC app/spdk_nvme_perf/perf.o 00:12:07.647 CC app/spdk_nvme_identify/identify.o 00:12:07.647 CC app/spdk_nvme_discover/discovery_aer.o 00:12:07.647 CC test/rpc_client/rpc_client_test.o 00:12:07.647 TEST_HEADER include/spdk/accel.h 00:12:07.647 TEST_HEADER include/spdk/accel_module.h 00:12:07.647 TEST_HEADER include/spdk/assert.h 00:12:07.647 TEST_HEADER include/spdk/barrier.h 00:12:07.647 TEST_HEADER include/spdk/base64.h 00:12:07.647 TEST_HEADER include/spdk/bdev.h 00:12:07.647 TEST_HEADER include/spdk/bdev_module.h 00:12:07.647 TEST_HEADER include/spdk/bdev_zone.h 00:12:07.647 TEST_HEADER include/spdk/bit_array.h 00:12:07.647 TEST_HEADER include/spdk/bit_pool.h 00:12:07.647 TEST_HEADER include/spdk/blob_bdev.h 00:12:07.647 TEST_HEADER include/spdk/blobfs_bdev.h 00:12:07.647 CC examples/interrupt_tgt/interrupt_tgt.o 00:12:07.647 TEST_HEADER include/spdk/blobfs.h 00:12:07.647 TEST_HEADER include/spdk/blob.h 00:12:07.647 TEST_HEADER include/spdk/conf.h 00:12:07.647 TEST_HEADER include/spdk/config.h 00:12:07.647 TEST_HEADER include/spdk/cpuset.h 00:12:07.647 TEST_HEADER include/spdk/crc16.h 00:12:07.647 CC app/spdk_dd/spdk_dd.o 00:12:07.647 CC app/iscsi_tgt/iscsi_tgt.o 00:12:07.647 CC app/nvmf_tgt/nvmf_main.o 00:12:07.647 TEST_HEADER include/spdk/crc32.h 00:12:07.647 TEST_HEADER include/spdk/crc64.h 00:12:07.647 TEST_HEADER include/spdk/dif.h 00:12:07.647 CC app/vhost/vhost.o 00:12:07.647 TEST_HEADER include/spdk/dma.h 00:12:07.647 TEST_HEADER include/spdk/endian.h 00:12:07.647 TEST_HEADER include/spdk/env_dpdk.h 00:12:07.647 TEST_HEADER include/spdk/env.h 00:12:07.647 TEST_HEADER include/spdk/event.h 00:12:07.647 TEST_HEADER include/spdk/fd_group.h 00:12:07.647 TEST_HEADER include/spdk/fd.h 00:12:07.647 TEST_HEADER include/spdk/file.h 00:12:07.647 TEST_HEADER include/spdk/ftl.h 00:12:07.647 TEST_HEADER include/spdk/gpt_spec.h 00:12:07.647 CC app/spdk_tgt/spdk_tgt.o 00:12:07.647 TEST_HEADER include/spdk/hexlify.h 00:12:07.647 CC examples/nvme/nvme_manage/nvme_manage.o 00:12:07.647 TEST_HEADER include/spdk/histogram_data.h 00:12:07.647 CC examples/util/zipf/zipf.o 00:12:07.647 CC examples/nvme/reconnect/reconnect.o 00:12:07.647 CC test/app/jsoncat/jsoncat.o 00:12:07.647 CC test/app/histogram_perf/histogram_perf.o 00:12:07.647 TEST_HEADER include/spdk/idxd.h 00:12:07.647 CC test/event/event_perf/event_perf.o 00:12:07.647 TEST_HEADER include/spdk/idxd_spec.h 00:12:07.647 CC examples/ioat/perf/perf.o 00:12:07.647 TEST_HEADER include/spdk/init.h 00:12:07.647 CC examples/vmd/lsvmd/lsvmd.o 00:12:07.647 CC 
examples/idxd/perf/perf.o 00:12:07.647 TEST_HEADER include/spdk/ioat.h 00:12:07.647 CC examples/nvme/arbitration/arbitration.o 00:12:07.647 TEST_HEADER include/spdk/ioat_spec.h 00:12:07.647 CC test/app/stub/stub.o 00:12:07.647 CC examples/accel/perf/accel_perf.o 00:12:07.647 TEST_HEADER include/spdk/iscsi_spec.h 00:12:07.647 CC examples/nvme/hello_world/hello_world.o 00:12:07.647 CC examples/nvme/hotplug/hotplug.o 00:12:07.647 CC examples/nvme/cmb_copy/cmb_copy.o 00:12:07.647 TEST_HEADER include/spdk/json.h 00:12:07.647 TEST_HEADER include/spdk/jsonrpc.h 00:12:07.647 CC examples/sock/hello_world/hello_sock.o 00:12:07.647 TEST_HEADER include/spdk/keyring.h 00:12:07.647 CC app/fio/nvme/fio_plugin.o 00:12:07.647 TEST_HEADER include/spdk/keyring_module.h 00:12:07.647 CC test/thread/poller_perf/poller_perf.o 00:12:07.647 CC test/nvme/aer/aer.o 00:12:07.647 TEST_HEADER include/spdk/likely.h 00:12:07.910 TEST_HEADER include/spdk/log.h 00:12:07.910 TEST_HEADER include/spdk/lvol.h 00:12:07.910 TEST_HEADER include/spdk/memory.h 00:12:07.910 TEST_HEADER include/spdk/mmio.h 00:12:07.910 TEST_HEADER include/spdk/nbd.h 00:12:07.910 TEST_HEADER include/spdk/notify.h 00:12:07.910 TEST_HEADER include/spdk/nvme.h 00:12:07.910 TEST_HEADER include/spdk/nvme_intel.h 00:12:07.910 TEST_HEADER include/spdk/nvme_ocssd.h 00:12:07.910 CC examples/blob/cli/blobcli.o 00:12:07.910 CC examples/bdev/hello_world/hello_bdev.o 00:12:07.910 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:12:07.910 CC examples/blob/hello_world/hello_blob.o 00:12:07.910 TEST_HEADER include/spdk/nvme_spec.h 00:12:07.910 CC test/bdev/bdevio/bdevio.o 00:12:07.910 TEST_HEADER include/spdk/nvme_zns.h 00:12:07.910 CC examples/bdev/bdevperf/bdevperf.o 00:12:07.910 TEST_HEADER include/spdk/nvmf_cmd.h 00:12:07.910 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:12:07.910 CC examples/nvmf/nvmf/nvmf.o 00:12:07.910 TEST_HEADER include/spdk/nvmf.h 00:12:07.910 CC test/blobfs/mkfs/mkfs.o 00:12:07.910 CC examples/thread/thread/thread_ex.o 00:12:07.910 TEST_HEADER include/spdk/nvmf_spec.h 00:12:07.910 CC test/dma/test_dma/test_dma.o 00:12:07.910 CC test/accel/dif/dif.o 00:12:07.910 TEST_HEADER include/spdk/nvmf_transport.h 00:12:07.910 TEST_HEADER include/spdk/opal.h 00:12:07.910 CC test/app/bdev_svc/bdev_svc.o 00:12:07.910 TEST_HEADER include/spdk/opal_spec.h 00:12:07.910 TEST_HEADER include/spdk/pci_ids.h 00:12:07.910 TEST_HEADER include/spdk/pipe.h 00:12:07.910 TEST_HEADER include/spdk/queue.h 00:12:07.910 TEST_HEADER include/spdk/reduce.h 00:12:07.910 TEST_HEADER include/spdk/rpc.h 00:12:07.910 TEST_HEADER include/spdk/scheduler.h 00:12:07.910 TEST_HEADER include/spdk/scsi.h 00:12:07.910 TEST_HEADER include/spdk/scsi_spec.h 00:12:07.910 TEST_HEADER include/spdk/sock.h 00:12:07.910 TEST_HEADER include/spdk/stdinc.h 00:12:07.910 TEST_HEADER include/spdk/string.h 00:12:07.910 TEST_HEADER include/spdk/thread.h 00:12:07.910 TEST_HEADER include/spdk/trace.h 00:12:07.910 LINK spdk_lspci 00:12:07.910 TEST_HEADER include/spdk/trace_parser.h 00:12:07.910 TEST_HEADER include/spdk/tree.h 00:12:07.910 TEST_HEADER include/spdk/ublk.h 00:12:07.910 TEST_HEADER include/spdk/util.h 00:12:07.910 TEST_HEADER include/spdk/uuid.h 00:12:07.910 TEST_HEADER include/spdk/version.h 00:12:07.910 CC test/env/mem_callbacks/mem_callbacks.o 00:12:07.910 TEST_HEADER include/spdk/vfio_user_pci.h 00:12:07.910 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:12:07.910 TEST_HEADER include/spdk/vfio_user_spec.h 00:12:07.910 TEST_HEADER include/spdk/vhost.h 00:12:07.910 TEST_HEADER include/spdk/vmd.h 
00:12:07.910 TEST_HEADER include/spdk/xor.h 00:12:07.910 TEST_HEADER include/spdk/zipf.h 00:12:07.910 CXX test/cpp_headers/accel.o 00:12:07.910 CC test/lvol/esnap/esnap.o 00:12:07.910 LINK rpc_client_test 00:12:07.910 LINK spdk_nvme_discover 00:12:07.910 LINK interrupt_tgt 00:12:08.176 LINK jsoncat 00:12:08.176 LINK zipf 00:12:08.176 LINK lsvmd 00:12:08.176 LINK histogram_perf 00:12:08.176 LINK nvmf_tgt 00:12:08.176 LINK event_perf 00:12:08.176 LINK spdk_trace_record 00:12:08.176 LINK poller_perf 00:12:08.176 LINK vhost 00:12:08.176 LINK iscsi_tgt 00:12:08.176 LINK stub 00:12:08.176 LINK cmb_copy 00:12:08.176 LINK spdk_tgt 00:12:08.176 LINK ioat_perf 00:12:08.176 LINK bdev_svc 00:12:08.176 LINK hello_world 00:12:08.176 LINK mkfs 00:12:08.176 LINK hotplug 00:12:08.176 LINK hello_sock 00:12:08.440 LINK hello_blob 00:12:08.440 LINK hello_bdev 00:12:08.440 LINK mem_callbacks 00:12:08.440 LINK thread 00:12:08.440 CXX test/cpp_headers/accel_module.o 00:12:08.440 LINK aer 00:12:08.440 LINK spdk_dd 00:12:08.440 CC examples/ioat/verify/verify.o 00:12:08.440 LINK idxd_perf 00:12:08.440 LINK reconnect 00:12:08.440 LINK arbitration 00:12:08.440 LINK nvmf 00:12:08.440 LINK spdk_trace 00:12:08.440 CC examples/nvme/abort/abort.o 00:12:08.704 CXX test/cpp_headers/assert.o 00:12:08.704 LINK bdevio 00:12:08.704 CXX test/cpp_headers/barrier.o 00:12:08.704 CC test/event/reactor/reactor.o 00:12:08.704 CXX test/cpp_headers/base64.o 00:12:08.704 CC test/env/vtophys/vtophys.o 00:12:08.704 LINK dif 00:12:08.704 CC examples/vmd/led/led.o 00:12:08.704 LINK test_dma 00:12:08.704 CC test/nvme/reset/reset.o 00:12:08.704 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:12:08.704 CC test/nvme/sgl/sgl.o 00:12:08.704 CXX test/cpp_headers/bdev.o 00:12:08.704 CC app/fio/bdev/fio_plugin.o 00:12:08.705 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:12:08.705 LINK accel_perf 00:12:08.705 CC test/event/reactor_perf/reactor_perf.o 00:12:08.705 LINK nvme_manage 00:12:08.705 CC test/event/app_repeat/app_repeat.o 00:12:08.705 LINK nvme_fuzz 00:12:08.705 CXX test/cpp_headers/bdev_module.o 00:12:08.705 CXX test/cpp_headers/bdev_zone.o 00:12:08.705 CXX test/cpp_headers/bit_array.o 00:12:08.966 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:12:08.966 LINK blobcli 00:12:08.966 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:12:08.966 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:12:08.966 LINK verify 00:12:08.966 LINK spdk_nvme 00:12:08.966 CC test/nvme/e2edp/nvme_dp.o 00:12:08.966 CC test/event/scheduler/scheduler.o 00:12:08.966 CXX test/cpp_headers/bit_pool.o 00:12:08.966 LINK reactor 00:12:08.966 CC test/nvme/overhead/overhead.o 00:12:08.966 LINK led 00:12:08.966 CC test/env/pci/pci_ut.o 00:12:08.966 CC test/nvme/err_injection/err_injection.o 00:12:08.966 CXX test/cpp_headers/blob_bdev.o 00:12:08.966 CC test/env/memory/memory_ut.o 00:12:08.966 LINK vtophys 00:12:08.966 CXX test/cpp_headers/blobfs_bdev.o 00:12:08.966 CXX test/cpp_headers/blobfs.o 00:12:08.966 CXX test/cpp_headers/blob.o 00:12:08.966 CC test/nvme/startup/startup.o 00:12:08.966 CC test/nvme/reserve/reserve.o 00:12:08.966 LINK reactor_perf 00:12:08.966 CXX test/cpp_headers/conf.o 00:12:08.966 LINK env_dpdk_post_init 00:12:08.966 CC test/nvme/simple_copy/simple_copy.o 00:12:09.231 CC test/nvme/connect_stress/connect_stress.o 00:12:09.231 LINK pmr_persistence 00:12:09.231 CXX test/cpp_headers/config.o 00:12:09.231 LINK app_repeat 00:12:09.231 CXX test/cpp_headers/cpuset.o 00:12:09.231 CXX test/cpp_headers/crc16.o 00:12:09.231 CXX test/cpp_headers/crc32.o 
00:12:09.231 CC test/nvme/boot_partition/boot_partition.o 00:12:09.231 CXX test/cpp_headers/crc64.o 00:12:09.231 LINK reset 00:12:09.231 LINK spdk_nvme_perf 00:12:09.231 CXX test/cpp_headers/dif.o 00:12:09.231 CC test/nvme/compliance/nvme_compliance.o 00:12:09.231 CXX test/cpp_headers/dma.o 00:12:09.231 LINK sgl 00:12:09.231 CXX test/cpp_headers/endian.o 00:12:09.231 CXX test/cpp_headers/env_dpdk.o 00:12:09.231 CC test/nvme/fused_ordering/fused_ordering.o 00:12:09.231 CXX test/cpp_headers/env.o 00:12:09.231 CXX test/cpp_headers/event.o 00:12:09.231 CC test/nvme/fdp/fdp.o 00:12:09.231 CXX test/cpp_headers/fd_group.o 00:12:09.231 CC test/nvme/doorbell_aers/doorbell_aers.o 00:12:09.231 CXX test/cpp_headers/fd.o 00:12:09.231 CC test/nvme/cuse/cuse.o 00:12:09.231 LINK abort 00:12:09.231 LINK spdk_nvme_identify 00:12:09.231 LINK bdevperf 00:12:09.498 LINK err_injection 00:12:09.498 CXX test/cpp_headers/file.o 00:12:09.498 CXX test/cpp_headers/ftl.o 00:12:09.498 CXX test/cpp_headers/gpt_spec.o 00:12:09.498 LINK scheduler 00:12:09.498 CXX test/cpp_headers/hexlify.o 00:12:09.498 LINK startup 00:12:09.498 CXX test/cpp_headers/histogram_data.o 00:12:09.498 CXX test/cpp_headers/idxd.o 00:12:09.498 CXX test/cpp_headers/idxd_spec.o 00:12:09.498 LINK nvme_dp 00:12:09.498 LINK reserve 00:12:09.498 CXX test/cpp_headers/init.o 00:12:09.498 CXX test/cpp_headers/ioat.o 00:12:09.498 LINK connect_stress 00:12:09.498 CXX test/cpp_headers/ioat_spec.o 00:12:09.498 CXX test/cpp_headers/iscsi_spec.o 00:12:09.498 LINK spdk_top 00:12:09.498 CXX test/cpp_headers/json.o 00:12:09.498 CXX test/cpp_headers/jsonrpc.o 00:12:09.498 LINK overhead 00:12:09.498 CXX test/cpp_headers/keyring.o 00:12:09.498 CXX test/cpp_headers/keyring_module.o 00:12:09.498 LINK simple_copy 00:12:09.498 LINK boot_partition 00:12:09.498 CXX test/cpp_headers/likely.o 00:12:09.498 CXX test/cpp_headers/log.o 00:12:09.761 CXX test/cpp_headers/lvol.o 00:12:09.761 LINK vhost_fuzz 00:12:09.761 CXX test/cpp_headers/memory.o 00:12:09.761 CXX test/cpp_headers/mmio.o 00:12:09.761 CXX test/cpp_headers/nbd.o 00:12:09.761 LINK spdk_bdev 00:12:09.761 CXX test/cpp_headers/notify.o 00:12:09.761 CXX test/cpp_headers/nvme.o 00:12:09.761 CXX test/cpp_headers/nvme_intel.o 00:12:09.761 CXX test/cpp_headers/nvme_ocssd.o 00:12:09.761 CXX test/cpp_headers/nvme_ocssd_spec.o 00:12:09.761 CXX test/cpp_headers/nvme_spec.o 00:12:09.761 LINK pci_ut 00:12:09.761 LINK doorbell_aers 00:12:09.761 LINK fused_ordering 00:12:09.761 CXX test/cpp_headers/nvme_zns.o 00:12:09.761 CXX test/cpp_headers/nvmf_cmd.o 00:12:09.761 CXX test/cpp_headers/nvmf_fc_spec.o 00:12:09.761 CXX test/cpp_headers/nvmf.o 00:12:09.761 CXX test/cpp_headers/nvmf_spec.o 00:12:09.761 CXX test/cpp_headers/nvmf_transport.o 00:12:09.761 CXX test/cpp_headers/opal.o 00:12:09.761 CXX test/cpp_headers/opal_spec.o 00:12:09.761 CXX test/cpp_headers/pci_ids.o 00:12:09.761 CXX test/cpp_headers/pipe.o 00:12:09.761 CXX test/cpp_headers/queue.o 00:12:09.761 CXX test/cpp_headers/reduce.o 00:12:09.761 CXX test/cpp_headers/rpc.o 00:12:09.761 CXX test/cpp_headers/scheduler.o 00:12:09.761 CXX test/cpp_headers/scsi.o 00:12:09.761 CXX test/cpp_headers/scsi_spec.o 00:12:10.020 CXX test/cpp_headers/sock.o 00:12:10.020 CXX test/cpp_headers/stdinc.o 00:12:10.020 CXX test/cpp_headers/string.o 00:12:10.020 CXX test/cpp_headers/thread.o 00:12:10.020 CXX test/cpp_headers/trace.o 00:12:10.020 LINK nvme_compliance 00:12:10.020 CXX test/cpp_headers/trace_parser.o 00:12:10.020 LINK memory_ut 00:12:10.020 CXX test/cpp_headers/tree.o 00:12:10.020 CXX 
test/cpp_headers/ublk.o 00:12:10.020 CXX test/cpp_headers/util.o 00:12:10.020 CXX test/cpp_headers/uuid.o 00:12:10.020 LINK fdp 00:12:10.020 CXX test/cpp_headers/version.o 00:12:10.020 CXX test/cpp_headers/vfio_user_pci.o 00:12:10.020 CXX test/cpp_headers/vfio_user_spec.o 00:12:10.020 CXX test/cpp_headers/vhost.o 00:12:10.020 CXX test/cpp_headers/vmd.o 00:12:10.020 CXX test/cpp_headers/xor.o 00:12:10.020 CXX test/cpp_headers/zipf.o 00:12:10.953 LINK cuse 00:12:11.211 LINK iscsi_fuzz 00:12:14.493 LINK esnap 00:12:14.493 00:12:14.493 real 0m40.800s 00:12:14.493 user 7m36.837s 00:12:14.493 sys 1m52.395s 00:12:14.493 08:38:09 make -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:12:14.493 08:38:09 make -- common/autotest_common.sh@10 -- $ set +x 00:12:14.493 ************************************ 00:12:14.493 END TEST make 00:12:14.493 ************************************ 00:12:14.493 08:38:09 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:12:14.493 08:38:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:12:14.493 08:38:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:12:14.493 08:38:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:14.493 08:38:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:12:14.493 08:38:09 -- pm/common@44 -- $ pid=2028520 00:12:14.493 08:38:09 -- pm/common@50 -- $ kill -TERM 2028520 00:12:14.493 08:38:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:14.493 08:38:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:12:14.493 08:38:09 -- pm/common@44 -- $ pid=2028522 00:12:14.493 08:38:09 -- pm/common@50 -- $ kill -TERM 2028522 00:12:14.493 08:38:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:14.493 08:38:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:12:14.493 08:38:09 -- pm/common@44 -- $ pid=2028524 00:12:14.493 08:38:09 -- pm/common@50 -- $ kill -TERM 2028524 00:12:14.493 08:38:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:14.493 08:38:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:12:14.493 08:38:09 -- pm/common@44 -- $ pid=2028559 00:12:14.493 08:38:09 -- pm/common@50 -- $ sudo -E kill -TERM 2028559 00:12:14.493 08:38:09 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.493 08:38:09 -- nvmf/common.sh@7 -- # uname -s 00:12:14.493 08:38:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.493 08:38:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.493 08:38:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.493 08:38:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.493 08:38:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.493 08:38:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.493 08:38:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.493 08:38:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.493 08:38:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.493 08:38:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.493 08:38:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:14.493 08:38:09 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:14.493 08:38:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.493 08:38:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.493 08:38:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.493 08:38:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.493 08:38:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.493 08:38:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.493 08:38:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.493 08:38:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.493 08:38:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.493 08:38:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.494 08:38:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.494 08:38:09 -- paths/export.sh@5 -- # export PATH 00:12:14.494 08:38:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.494 08:38:09 -- nvmf/common.sh@47 -- # : 0 00:12:14.494 08:38:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:14.494 08:38:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:14.494 08:38:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.494 08:38:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.494 08:38:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.494 08:38:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:14.494 08:38:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:14.494 08:38:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:14.494 08:38:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:12:14.494 08:38:09 -- spdk/autotest.sh@32 -- # uname -s 00:12:14.494 08:38:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:12:14.494 08:38:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:12:14.494 08:38:09 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:12:14.494 08:38:09 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:12:14.494 08:38:09 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:12:14.494 08:38:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:12:14.494 08:38:09 -- spdk/autotest.sh@46 -- # type -P 
udevadm 00:12:14.494 08:38:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:12:14.494 08:38:09 -- spdk/autotest.sh@48 -- # udevadm_pid=2103698 00:12:14.494 08:38:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:12:14.494 08:38:09 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:12:14.494 08:38:09 -- pm/common@17 -- # local monitor 00:12:14.494 08:38:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:12:14.494 08:38:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:12:14.494 08:38:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:12:14.494 08:38:09 -- pm/common@21 -- # date +%s 00:12:14.494 08:38:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:12:14.494 08:38:09 -- pm/common@21 -- # date +%s 00:12:14.494 08:38:09 -- pm/common@25 -- # sleep 1 00:12:14.494 08:38:09 -- pm/common@21 -- # date +%s 00:12:14.494 08:38:09 -- pm/common@21 -- # date +%s 00:12:14.494 08:38:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715755089 00:12:14.494 08:38:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715755089 00:12:14.494 08:38:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715755089 00:12:14.494 08:38:09 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715755089 00:12:14.494 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715755089_collect-vmstat.pm.log 00:12:14.494 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715755089_collect-cpu-load.pm.log 00:12:14.494 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715755089_collect-cpu-temp.pm.log 00:12:14.494 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715755089_collect-bmc-pm.bmc.pm.log 00:12:15.868 08:38:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:12:15.868 08:38:10 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:12:15.868 08:38:10 -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:15.868 08:38:10 -- common/autotest_common.sh@10 -- # set +x 00:12:15.868 08:38:10 -- spdk/autotest.sh@59 -- # create_test_list 00:12:15.868 08:38:10 -- common/autotest_common.sh@745 -- # xtrace_disable 00:12:15.868 08:38:10 -- common/autotest_common.sh@10 -- # set +x 00:12:15.868 08:38:10 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:12:15.868 08:38:10 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:15.868 08:38:10 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:15.868 08:38:10 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:12:15.868 08:38:10 -- spdk/autotest.sh@63 -- # cd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:15.868 08:38:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:12:15.868 08:38:10 -- common/autotest_common.sh@1452 -- # uname 00:12:15.868 08:38:10 -- common/autotest_common.sh@1452 -- # '[' Linux = FreeBSD ']' 00:12:15.868 08:38:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:12:15.868 08:38:10 -- common/autotest_common.sh@1472 -- # uname 00:12:15.868 08:38:10 -- common/autotest_common.sh@1472 -- # [[ Linux = FreeBSD ]] 00:12:15.868 08:38:10 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:12:15.868 08:38:10 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:12:15.868 08:38:10 -- spdk/autotest.sh@72 -- # hash lcov 00:12:15.868 08:38:10 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:12:15.868 08:38:10 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:12:15.868 --rc lcov_branch_coverage=1 00:12:15.868 --rc lcov_function_coverage=1 00:12:15.868 --rc genhtml_branch_coverage=1 00:12:15.868 --rc genhtml_function_coverage=1 00:12:15.868 --rc genhtml_legend=1 00:12:15.868 --rc geninfo_all_blocks=1 00:12:15.868 ' 00:12:15.868 08:38:10 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:12:15.868 --rc lcov_branch_coverage=1 00:12:15.868 --rc lcov_function_coverage=1 00:12:15.868 --rc genhtml_branch_coverage=1 00:12:15.868 --rc genhtml_function_coverage=1 00:12:15.868 --rc genhtml_legend=1 00:12:15.868 --rc geninfo_all_blocks=1 00:12:15.868 ' 00:12:15.868 08:38:10 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:12:15.868 --rc lcov_branch_coverage=1 00:12:15.868 --rc lcov_function_coverage=1 00:12:15.868 --rc genhtml_branch_coverage=1 00:12:15.868 --rc genhtml_function_coverage=1 00:12:15.868 --rc genhtml_legend=1 00:12:15.868 --rc geninfo_all_blocks=1 00:12:15.868 --no-external' 00:12:15.868 08:38:10 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:12:15.868 --rc lcov_branch_coverage=1 00:12:15.868 --rc lcov_function_coverage=1 00:12:15.868 --rc genhtml_branch_coverage=1 00:12:15.868 --rc genhtml_function_coverage=1 00:12:15.868 --rc genhtml_legend=1 00:12:15.868 --rc geninfo_all_blocks=1 00:12:15.868 --no-external' 00:12:15.868 08:38:10 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:12:15.868 lcov: LCOV version 1.14 00:12:15.868 08:38:10 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:12:28.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:12:28.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:12:29.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:12:29.425 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:12:29.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:12:29.425 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:12:29.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:12:29.425 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:12:47.539 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:12:47.539 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:12:47.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:12:47.539 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno
00:12:47.539-00:12:47.540 geninfo: WARNING: GCOV did not produce any data (".gcno:no functions found") for the following objects under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/: json.gcno, jsonrpc.gcno, keyring.gcno, keyring_module.gcno, likely.gcno, log.gcno, lvol.gcno, memory.gcno, mmio.gcno, notify.gcno, nbd.gcno, nvme.gcno, nvme_intel.gcno, nvme_ocssd.gcno, nvme_ocssd_spec.gcno, nvme_spec.gcno, nvme_zns.gcno, nvmf_cmd.gcno, nvmf_fc_spec.gcno, nvmf.gcno, nvmf_spec.gcno, nvmf_transport.gcno, opal.gcno, opal_spec.gcno, pci_ids.gcno, pipe.gcno, queue.gcno, reduce.gcno, rpc.gcno, scheduler.gcno, scsi.gcno, scsi_spec.gcno, sock.gcno, stdinc.gcno, string.gcno, thread.gcno, trace.gcno, trace_parser.gcno, tree.gcno, ublk.gcno, util.gcno, uuid.gcno, version.gcno, vfio_user_pci.gcno, vfio_user_spec.gcno, vhost.gcno, xor.gcno, vmd.gcno, zipf.gcno
00:12:48.480 08:38:43 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:12:48.480 08:38:43 -- common/autotest_common.sh@721 -- # xtrace_disable
00:12:48.480 08:38:43 -- common/autotest_common.sh@10 -- # set +x
00:12:48.480 08:38:43 -- spdk/autotest.sh@91 -- # rm -f
00:12:48.480 08:38:43 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:12:49.855 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:12:49.855 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:12:49.855 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:12:49.855 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:12:49.855 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:12:49.855 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:12:49.855 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:12:49.855 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:12:49.855 0000:0b:00.0 (8086 0a54): Already using the nvme driver
00:12:49.855 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:12:49.855 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:12:49.855 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:12:49.855 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:12:49.855 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:12:49.855 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:12:49.855 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:12:50.114 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:12:50.114 08:38:44 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:12:50.114 08:38:44 -- common/autotest_common.sh@1666 -- # zoned_devs=()
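The geninfo warnings above are expected for the cpp_headers units: each test compiles one public SPDK header into a translation unit that defines no functions, so the emitted .gcno carries no function records and geninfo has nothing to capture. A minimal sketch of how the pattern arises, outside this CI run (file names and include path are hypothetical):

    # Compile a TU that only includes a header; gcc --coverage still emits a .gcno.
    printf '#include "spdk/json.h"\n' > hdr_only.c
    gcc -I spdk/include --coverage -c hdr_only.c -o hdr_only.o
    # Capturing a coverage baseline then warns about the function-less .gcno,
    # roughly: geninfo: WARNING: GCOV did not produce any data for .../hdr_only.gcno
    lcov --capture --initial --directory . --output-file cov_base.info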
00:12:50.114 08:38:44 -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:12:50.114 08:38:44 -- common/autotest_common.sh@1667 -- # local nvme bdf 00:12:50.114 08:38:44 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:12:50.114 08:38:44 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:12:50.114 08:38:44 -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:12:50.114 08:38:44 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:50.114 08:38:44 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:12:50.114 08:38:44 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:12:50.114 08:38:44 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:12:50.114 08:38:44 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:12:50.114 08:38:44 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:12:50.114 08:38:44 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:12:50.114 08:38:44 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:50.114 No valid GPT data, bailing 00:12:50.114 08:38:44 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:50.114 08:38:44 -- scripts/common.sh@391 -- # pt= 00:12:50.114 08:38:44 -- scripts/common.sh@392 -- # return 1 00:12:50.114 08:38:44 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:12:50.114 1+0 records in 00:12:50.114 1+0 records out 00:12:50.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00246555 s, 425 MB/s 00:12:50.114 08:38:44 -- spdk/autotest.sh@118 -- # sync 00:12:50.114 08:38:44 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:12:50.114 08:38:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:12:50.114 08:38:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:12:52.016 08:38:46 -- spdk/autotest.sh@124 -- # uname -s 00:12:52.016 08:38:46 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:12:52.016 08:38:46 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:12:52.016 08:38:46 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:52.016 08:38:46 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:52.016 08:38:46 -- common/autotest_common.sh@10 -- # set +x 00:12:52.016 ************************************ 00:12:52.016 START TEST setup.sh 00:12:52.016 ************************************ 00:12:52.016 08:38:46 setup.sh -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:12:52.016 * Looking for test storage... 
00:12:52.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:12:52.016 08:38:46 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:12:52.016 08:38:46 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:12:52.016 08:38:46 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:12:52.016 08:38:46 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:52.016 08:38:46 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:52.016 08:38:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:12:52.016 ************************************ 00:12:52.016 START TEST acl 00:12:52.016 ************************************ 00:12:52.016 08:38:46 setup.sh.acl -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:12:52.016 * Looking for test storage... 00:12:52.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:12:52.016 08:38:46 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:12:52.016 08:38:46 setup.sh.acl -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:12:52.016 08:38:46 setup.sh.acl -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:12:52.016 08:38:46 setup.sh.acl -- common/autotest_common.sh@1667 -- # local nvme bdf 00:12:52.016 08:38:46 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:12:52.016 08:38:46 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:12:52.016 08:38:46 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:12:52.016 08:38:46 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:52.016 08:38:46 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:12:52.016 08:38:46 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:12:52.016 08:38:46 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:12:52.016 08:38:46 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:12:52.016 08:38:46 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:12:52.016 08:38:46 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:12:52.016 08:38:46 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:52.016 08:38:46 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:12:53.920 08:38:48 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:12:53.920 08:38:48 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:12:53.920 08:38:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:53.920 08:38:48 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:12:53.920 08:38:48 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:12:53.920 08:38:48 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:12:55.295 Hugepages 00:12:55.295 node hugesize free / total 00:12:55.295 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:12:55.295 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:12:55.295 08:38:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:55.295 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:12:55.295 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:12:55.295 08:38:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:12:55.295 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:12:55.295 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:12:55.295 08:38:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:55.295 00:12:55.295 Type BDF Vendor Device NUMA Driver Device Block devices 00:12:55.295 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:12:55.295 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:12:55.295 08:38:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:55.295 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:0b:00.0 == *:*:*.* ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:12:55.296 08:38:49 setup.sh.acl 
-- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:12:55.296 08:38:49 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:12:55.296 08:38:49 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:55.296 08:38:49 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:55.296 08:38:49 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:12:55.296 ************************************ 00:12:55.296 START TEST denied 00:12:55.296 ************************************ 00:12:55.296 08:38:49 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # denied 00:12:55.296 08:38:49 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:0b:00.0' 00:12:55.296 08:38:49 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 
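The denied test set up above exports PCI_BLOCKED=' 0000:0b:00.0' before rerunning setup.sh config, then greps for the skip message seen just below. A sketch of the allow/block gate this exercises; PCI_BLOCKED and PCI_ALLOWED are the real environment knobs, but the helper body here is an assumed re-implementation, not SPDK's source:

    pci_can_use() {
        local bdf=$1
        # An allowlist, when set, wins first: any BDF not listed is rejected.
        if [[ -n ${PCI_ALLOWED:-} && " $PCI_ALLOWED " != *" $bdf "* ]]; then
            return 1
        fi
        # Otherwise the blocklist rejects explicitly named controllers.
        [[ " ${PCI_BLOCKED:-} " == *" $bdf "* ]] && return 1
        return 0
    }
    PCI_BLOCKED=' 0000:0b:00.0' pci_can_use 0000:0b:00.0 \
        || echo 'Skipping denied controller at 0000:0b:00.0'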
00:12:55.296 08:38:49 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:0b:00.0' 00:12:55.296 08:38:49 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:12:55.296 08:38:49 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:12:57.197 0000:0b:00.0 (8086 0a54): Skipping denied controller at 0000:0b:00.0 00:12:57.197 08:38:51 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:0b:00.0 00:12:57.197 08:38:51 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:12:57.197 08:38:51 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:12:57.197 08:38:51 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:0b:00.0 ]] 00:12:57.197 08:38:51 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:0b:00.0/driver 00:12:57.197 08:38:51 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:12:57.197 08:38:51 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:12:57.197 08:38:51 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:12:57.197 08:38:51 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:57.197 08:38:51 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:12:59.097 00:12:59.097 real 0m3.970s 00:12:59.097 user 0m1.202s 00:12:59.097 sys 0m1.944s 00:12:59.097 08:38:53 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:59.097 08:38:53 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:12:59.097 ************************************ 00:12:59.097 END TEST denied 00:12:59.097 ************************************ 00:12:59.355 08:38:53 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:12:59.355 08:38:53 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:59.355 08:38:53 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:59.355 08:38:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:12:59.355 ************************************ 00:12:59.355 START TEST allowed 00:12:59.355 ************************************ 00:12:59.355 08:38:53 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # allowed 00:12:59.355 08:38:53 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:0b:00.0 00:12:59.355 08:38:53 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:12:59.355 08:38:53 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:0b:00.0 .*: nvme -> .*' 00:12:59.355 08:38:53 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:12:59.355 08:38:53 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:13:01.885 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:13:01.885 08:38:56 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:13:01.885 08:38:56 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:13:01.885 08:38:56 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:13:01.885 08:38:56 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:01.885 08:38:56 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:13:03.819 00:13:03.819 real 0m4.131s 00:13:03.819 user 0m1.169s 00:13:03.819 sys 0m1.939s 00:13:03.820 08:38:58 
setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:03.820 08:38:58 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:13:03.820 ************************************ 00:13:03.820 END TEST allowed 00:13:03.820 ************************************ 00:13:03.820 00:13:03.820 real 0m11.341s 00:13:03.820 user 0m3.650s 00:13:03.820 sys 0m5.937s 00:13:03.820 08:38:58 setup.sh.acl -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:03.820 08:38:58 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:13:03.820 ************************************ 00:13:03.820 END TEST acl 00:13:03.820 ************************************ 00:13:03.820 08:38:58 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:13:03.820 08:38:58 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:03.820 08:38:58 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:03.820 08:38:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:13:03.820 ************************************ 00:13:03.820 START TEST hugepages 00:13:03.820 ************************************ 00:13:03.820 08:38:58 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:13:03.820 * Looking for test storage... 00:13:03.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:13:03.820 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:13:03.820 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:13:03.820 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:13:03.820 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:13:03.820 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:13:03.820 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:13:03.820 08:38:58 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:13:03.820 08:38:58 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:13:03.820 08:38:58 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:13:03.820 08:38:58 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:13:03.820 08:38:58 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:03.820 08:38:58 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:03.820 08:38:58 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:03.820 08:38:58 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:13:03.820 08:38:58 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:03.820 08:38:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:13:03.820 08:38:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:13:03.820 08:38:58 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 35587364 kB' 'MemAvailable: 40319800 kB' 'Buffers: 11004 kB' 'Cached: 18362168 kB' 'SwapCached: 0 kB' 'Active: 14336956 kB' 'Inactive: 4489080 kB' 'Active(anon): 13721720 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 
kB' 'AnonPages: 456480 kB' 'Mapped: 208724 kB' 'Shmem: 13268856 kB' 'KReclaimable: 243916 kB' 'Slab: 618060 kB' 'SReclaimable: 243916 kB' 'SUnreclaim: 374144 kB' 'KernelStack: 12992 kB' 'PageTables: 8732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 14853128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198364 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB'
00:13:03.820 08:38:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:13:03.820 08:38:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:13:03.820 08:38:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:13:03.820 08:38:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
[... the same test/continue/read cycle repeats for every remaining /proc/meminfo field, MemFree through HugePages_Surp, until the requested key matches ...]
00:13:03.822 08:38:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:13:03.822 08:38:58 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:13:03.822 08:38:58 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:13:03.822 08:38:58 setup.sh.hugepages --
setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:13:03.822 08:38:58 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:13:03.822 08:38:58 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:03.822 08:38:58 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:03.822 08:38:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:13:03.822 ************************************ 00:13:03.822 START TEST default_setup 00:13:03.822 ************************************ 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # default_setup 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:13:03.822 08:38:58 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:13:03.822 08:38:58 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:13:05.197 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:13:05.197 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:13:05.197 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:13:05.197 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:13:05.197 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:13:05.197 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:13:05.197 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:13:05.197 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:13:05.197 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:13:05.197 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:13:05.197 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:13:05.197 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:13:05.197 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:13:05.197 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:13:05.197 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:13:05.197 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:13:06.136 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37658276 kB' 'MemAvailable: 42390804 kB' 'Buffers: 11004 kB' 'Cached: 18362268 kB' 'SwapCached: 0 kB' 'Active: 14364648 kB' 'Inactive: 4489080 kB' 'Active(anon): 13749412 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483764 kB' 'Mapped: 209620 kB' 'Shmem: 13268956 kB' 'KReclaimable: 244100 kB' 'Slab: 618348 kB' 'SReclaimable: 244100 kB' 'SUnreclaim: 374248 kB' 'KernelStack: 13664 kB' 'PageTables: 10196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14886604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198892 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB' 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.136 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.136 08:39:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
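[editor's note] The xtrace records above and below show setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time: it slurps the file with mapfile, strips any per-node "Node N " prefixes, splits each line on ': ', and keeps issuing `continue` until the requested key matches, at which point it echoes the value and returns 0. A minimal sketch of that loop, reconstructed from the trace (the node-argument handling and exact control flow are assumptions, not the verbatim setup/common.sh source):

    # Reconstructed from the xtrace output above; details are assumptions.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo mem line var val _
        # per-node queries read that node's own meminfo when it exists
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip "Node N " prefixes (needs extglob)
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"  # e.g. var=HugePages_Total val=1024
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

With the meminfo snapshot printed in this trace, `get_meminfo AnonHugePages` would print 0 and `get_meminfo HugePages_Total` would print 1024; the long run of `continue` records is simply this loop skipping every non-matching key.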
00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.137 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37668360 kB' 'MemAvailable: 42400888 kB' 'Buffers: 11004 kB' 'Cached: 18362268 kB' 'SwapCached: 0 kB' 'Active: 14370020 kB' 'Inactive: 4489080 kB' 'Active(anon): 13754784 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489532 kB' 'Mapped: 209744 kB' 'Shmem: 13268956 kB' 'KReclaimable: 244100 kB' 'Slab: 618396 kB' 'SReclaimable: 244100 kB' 'SUnreclaim: 374296 kB' 'KernelStack: 13664 kB' 'PageTables: 11540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14892496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198884 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.138 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:13:06.139 08:39:00 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.139 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37668396 kB' 'MemAvailable: 42400924 kB' 'Buffers: 11004 kB' 'Cached: 18362284 kB' 'SwapCached: 0 kB' 'Active: 14368792 kB' 'Inactive: 4489080 kB' 'Active(anon): 13753556 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485216 kB' 'Mapped: 209464 kB' 'Shmem: 13268972 kB' 'KReclaimable: 244100 kB' 'Slab: 618292 kB' 'SReclaimable: 244100 kB' 'SUnreclaim: 374192 kB' 'KernelStack: 13488 kB' 'PageTables: 10520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14889224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198512 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.140 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': '
00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@31-32 xtrace repeats for each remaining /proc/meminfo key (SecPageTables through HugePages_Free), none matching HugePages_Rsvd ...]
00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:13:06.141 nr_hugepages=1024
00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:13:06.141 resv_hugepages=0
00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:13:06.141 surplus_hugepages=0
00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:13:06.141 anon_hugepages=0
00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:13:06.141 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:13:06.142 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
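Editor's note: the get_meminfo calls traced in this log read one field out of /proc/meminfo (or a per-NUMA-node meminfo file when a node argument is given) with a plain IFS=': ' read loop; that loop is what produces the long runs of setup/common.sh@32 'continue' records. A minimal sketch of the pattern, reconstructed from the xtrace rather than copied from setup/common.sh, so treat details as approximate:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node N " prefix strip below
    get_meminfo() {
        local get=$1 node=$2 var val mem_f mem
        mem_f=/proc/meminfo
        # Prefer the per-node counters when a node id was passed in.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # node*/meminfo prefixes every line with "Node N "; drop that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # Walk key by key until the requested field matches, then print it.
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Invoked as get_meminfo HugePages_Rsvd it prints 0 on this host, which is where the resv=0 assignment above comes from.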
00:13:06.142 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:13:06.142 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:13:06.142 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:13:06.142 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:13:06.142 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:13:06.142 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:13:06.142 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:13:06.142 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:13:06.142 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:13:06.142 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:13:06.142 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:13:06.142 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37670008 kB' 'MemAvailable: 42402536 kB' 'Buffers: 11004 kB' 'Cached: 18362308 kB' 'SwapCached: 0 kB' 'Active: 14364508 kB' 'Inactive: 4489080 kB' 'Active(anon): 13749272 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484020 kB' 'Mapped: 209544 kB' 'Shmem: 13268996 kB' 'KReclaimable: 244100 kB' 'Slab: 618396 kB' 'SReclaimable: 244100 kB' 'SUnreclaim: 374296 kB' 'KernelStack: 13120 kB' 'PageTables: 9028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14891628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198496 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB'
[... setup/common.sh@31-32 xtrace repeats for every key in the dump (MemTotal through Unaccepted), none matching HugePages_Total ...]
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:13:06.143 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:13:06.144 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:13:06.144 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
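Editor's note: the get_nodes walk traced above discovers the NUMA topology by globbing /sys/devices/system/node/node<N> and records a hugepage count per node; on this box it ends with node0=1024, node1=0 and no_nodes=2, and the HugePages_Surp lookup that follows switches mem_f to the node-local meminfo file. A rough standalone equivalent that reads the per-node counters straight from sysfs (the traced helper derives its numbers differently, so this is illustrative only):

    #!/usr/bin/env bash
    shopt -s extglob
    declare -a nodes_sys
    # Enumerate NUMA nodes with the same glob the trace shows and read each
    # node's 2048 kB hugepage counter from sysfs.
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || exit 1   # bail out if sysfs exposed no nodes
    declare -p nodes_sys           # e.g. declare -a nodes_sys=([0]="1024" [1]="0")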
00:13:06.144 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20915464 kB' 'MemUsed: 11961476 kB' 'SwapCached: 0 kB' 'Active: 7875228 kB' 'Inactive: 1097524 kB' 'Active(anon): 7542728 kB' 'Inactive(anon): 0 kB' 'Active(file): 332500 kB' 'Inactive(file): 1097524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8649568 kB' 'Mapped: 71764 kB' 'AnonPages: 326344 kB' 'Shmem: 7219544 kB' 'KernelStack: 8344 kB' 'PageTables: 5888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130276 kB' 'Slab: 307280 kB' 'SReclaimable: 130276 kB' 'SUnreclaim: 177004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... setup/common.sh@31-32 xtrace repeats for every node0 meminfo key (MemTotal through HugePages_Free), none matching HugePages_Surp ...]
00:13:06.145 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:13:06.145 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:13:06.145 08:39:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:13:06.145 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:13:06.145 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:13:06.145 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:13:06.145 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:13:06.145 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:13:06.145 node0=1024 expecting 1024
00:13:06.145 08:39:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:13:06.145
00:13:06.145 real 0m2.619s
00:13:06.145 user 0m0.675s
00:13:06.145 sys 0m0.983s
00:13:06.145 08:39:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # xtrace_disable
00:13:06.145 08:39:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:13:06.145 ************************************
00:13:06.145 END TEST default_setup
00:13:06.145 ************************************
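Editor's note: the PASS just logged comes down to one identity: HugePages_Total (1024) must equal nr_hugepages + surplus + reserved, and the per-node counters must account for the same pages; here all 1024 sit on node0. A hedged re-check of that accounting with stock tools, independent of the test scripts (the awk keys are standard /proc/meminfo fields):

    #!/usr/bin/env bash
    # Re-derive the numbers behind "node0=1024 expecting 1024".
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
    rsvd=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
    node_sum=0
    for f in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
        node_sum=$(( node_sum + $(< "$f") ))   # sum the per-node reservations
    done
    echo "total=$total surp=$surp rsvd=$rsvd node_sum=$node_sum"
    (( total == node_sum )) || echo "per-node counters disagree with the global total"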
00:13:06.145 08:39:00 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:13:06.145 08:39:00 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:13:06.145 08:39:00 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable
00:13:06.145 08:39:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:13:06.403 ************************************
00:13:06.403 START TEST per_node_1G_alloc
00:13:06.403 ************************************
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # per_node_1G_alloc
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:13:06.403 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:13:06.404 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:13:06.404 08:39:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:13:07.337 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:13:07.337 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:13:07.337 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:13:07.337 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:13:07.337 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:13:07.337 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:13:07.337 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:13:07.337 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:13:07.337 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:13:07.337 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:13:07.337 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:13:07.337 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:13:07.337 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:13:07.337 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:13:07.337 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:13:07.337 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:13:07.337 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
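Editor's note: get_test_nr_hugepages was asked for 1048576 kB (1 GiB) spread across nodes 0 and 1; with the host's default 2048 kB hugepage size that works out to 1048576 / 2048 = 512 pages per node, hence the NRHUGE=512 and HUGENODE=0,1 environment for the setup.sh run whose output appears just above. The conversion, condensed (variable names are mine, not the script's):

    #!/usr/bin/env bash
    # 1 GiB per node expressed in default-size hugepages.
    size_kb=1048576                                                  # requested size in kB
    page_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)  # 2048 on this host
    echo "$(( size_kb / page_kb )) pages per node"                   # -> 512
    # As traced above, setup.sh is then driven with:
    #   NRHUGE=512 HUGENODE=0,1 scripts/setup.sh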
00:13:07.609 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:13:07.609 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:13:07.609 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:13:07.609 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:13:07.609 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:13:07.609 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:13:07.609 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:13:07.609 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:13:07.609 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:13:07.609 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:13:07.609 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:13:07.609 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:13:07.609 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:13:07.609 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:13:07.609 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:13:07.609 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:13:07.609 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:13:07.609 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:13:07.609 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:13:07.609 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:13:07.610 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:13:07.610 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37627232 kB' 'MemAvailable: 42359760 kB' 'Buffers: 11004 kB' 'Cached: 18362384 kB' 'SwapCached: 0 kB' 'Active: 14361464 kB' 'Inactive: 4489080 kB' 'Active(anon): 13746228 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480864 kB' 'Mapped: 209628 kB' 'Shmem: 13269072 kB' 'KReclaimable: 244100 kB' 'Slab: 618164 kB' 'SReclaimable: 244100 kB' 'SUnreclaim: 374064 kB' 'KernelStack: 13024 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14882648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198656 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB'
[... setup/common.sh@31-32 xtrace continues key by key through the dump, scanning for AnonHugePages ...]
Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37628804 kB' 'MemAvailable: 42361332 kB' 'Buffers: 11004 kB' 'Cached: 18362384 kB' 'SwapCached: 0 kB' 'Active: 14361808 kB' 'Inactive: 4489080 kB' 'Active(anon): 13746572 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480796 kB' 'Mapped: 209704 kB' 'Shmem: 13269072 kB' 'KReclaimable: 244100 kB' 'Slab: 618184 kB' 'SReclaimable: 244100 kB' 'SUnreclaim: 374084 kB' 'KernelStack: 12976 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14882664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198624 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB' 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:07.612 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.612 08:39:02 
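The trace above is bash xtrace output from SPDK's setup/common.sh get_meminfo helper: it loads a meminfo file into an array, walks it field by field with IFS=': ' read, and echoes the value of the first field whose name matches the requested key (here AnonHugePages, value 0, assigned to anon by setup/hugepages.sh@97). A minimal sketch of that loop, reconstructed from the trace alone; the for/here-string form and the trailing return 1 are assumptions, not confirmed against the real script:

    #!/usr/bin/env bash
    shopt -s extglob   # for the +([0-9]) pattern used when stripping "Node <N> "

    get_meminfo() {
        local get=$1       # field name to look up, e.g. AnonHugePages
        local node=${2:-}  # optional NUMA node index
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node index, prefer the per-node view if it exists.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes each line with "Node <N> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"    # numeric part only; the "kB" unit lands in $_
            return 0
        done
        return 1
    }

Called as get_meminfo AnonHugePages with no node argument, exactly as in this run, the loop skips every field until AnonHugePages matches and echoes 0, which is the `echo 0` / `return 0` pair visible in the trace.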
[... repetitive xtrace condensed: setup/common.sh@31-32 compares each /proc/meminfo field (MemTotal through HugePages_Rsvd) against HugePages_Surp and issues 'continue' for every non-match ...]
00:13:07.621 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:13:07.621 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:13:07.621 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:13:07.621 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:13:07.622 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:13:07.622 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:13:07.622 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:13:07.622 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:13:07.622 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:13:07.622 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:13:07.622 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:13:07.622 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:13:07.622 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:13:07.622 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:13:07.622 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:13:07.622 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37630368 kB' 'MemAvailable: 42362896 kB' 'Buffers: 11004 kB' 'Cached: 18362408 kB' 'SwapCached: 0 kB' 'Active: 14361696 kB' 'Inactive: 4489080 kB' 'Active(anon): 13746460 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480640 kB' 'Mapped: 209624 kB' 'Shmem: 13269096 kB' 'KReclaimable: 244100 kB' 'Slab: 618192 kB' 'SReclaimable: 244100 kB' 'SUnreclaim: 374092 kB' 'KernelStack: 13072 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14882688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198624 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB'
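Note that every get_meminfo call in this run passes no node argument, so node= stays empty, the setup/common.sh@23 test probes the nonexistent literal path /sys/devices/system/node/node/meminfo, and the helper falls back to the system-wide /proc/meminfo. The "Node +([0-9]) " strip at @29 is therefore a no-op here; it only matters for per-node reads, where every line carries a "Node <N> " prefix. A small illustration of that strip (sample values invented, not taken from this host):

    shopt -s extglob
    mem=('Node 0 MemTotal: 30270864 kB' 'Node 0 HugePages_Total: 512')
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
    # prints:
    # MemTotal: 30270864 kB
    # HugePages_Total: 512

After the strip, per-node lines parse with exactly the same IFS=': ' read loop as /proc/meminfo lines.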
08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.623 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.624 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.624 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.624 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.624 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.624 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.624 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.624 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.624 08:39:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.624 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.624 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.624 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.624 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.624 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.624 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.624 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.624 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.625 08:39:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.625 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.628 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.629 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.629 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.629 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.629 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.629 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.629 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.629 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.629 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.629 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.629 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.629 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.629 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.629 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.629 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.629 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.629 08:39:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.629 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.629 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.630 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.630 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.630 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.630 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.630 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.630 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.630 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.630 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.630 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.630 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.630 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.630 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.630 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.630 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.630 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.634 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.635 08:39:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:07.635 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:07.636 08:39:02 
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:13:07.636 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37630368 kB' 'MemAvailable: 42362896 kB' 'Buffers: 11004 kB' 'Cached: 18362432 kB' 'SwapCached: 0 kB' 'Active: 14361720 kB' 'Inactive: 4489080 kB' 'Active(anon): 13746484 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480644 kB' 'Mapped: 209624 kB' 'Shmem: 13269120 kB' 'KReclaimable: 244100 kB' 'Slab: 618192 kB' 'SReclaimable: 244100 kB' 'SUnreclaim: 374092 kB' 'KernelStack: 13072 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14882712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198624 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB'
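For readability of the traces above and below: get_meminfo selects /proc/meminfo (or a per-node meminfo file when a node id is passed), strips any "Node <N> " prefix, then scans field by field until the requested key matches and echoes its value. A minimal sketch of that pattern, assuming bash with extglob; get_meminfo_sketch is a simplified stand-in for illustration, not the verbatim setup/common.sh source:

shopt -s extglob                         # needed for the +([0-9]) prefix pattern
get_meminfo_sketch() {
    local get=$1 node=${2:-} mem line var val _
    local mem_f=/proc/meminfo
    # Per-node queries read the node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Per-node meminfo lines carry a "Node <N> " prefix; strip it so both
    # file formats parse identically.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # Split "Key:   value kB" on ':' and spaces into key and value.
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                  # numeric value only, unit dropped
            return 0
        fi
    done
    return 1
}

Against the snapshot above, get_meminfo_sketch HugePages_Total would print 1024 and get_meminfo_sketch HugePages_Rsvd would print 0, matching the resv=0 result in the trace.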
[trace condensed: setup/common.sh@31-32 compare/continue iterations over every /proc/meminfo field from MemTotal onward, none matching HugePages_Total until the final entry]
00:13:07.644 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:13:07.644 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:13:07.644 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:13:07.644 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:13:07.644 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:13:07.645 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21946408 kB' 'MemUsed: 10930532 kB' 'SwapCached: 0 kB' 'Active: 7869112 kB' 'Inactive: 1097524 kB' 'Active(anon): 7536612 kB' 'Inactive(anon): 0 kB' 'Active(file): 332500 kB' 'Inactive(file): 1097524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8649564 kB' 'Mapped: 71780 kB' 'AnonPages: 320240 kB' 'Shmem: 7219540 kB' 'KernelStack: 8232 kB' 'PageTables: 5516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130276 kB' 'Slab: 307196 kB' 'SReclaimable: 130276 kB' 'SUnreclaim: 176920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
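The get_nodes enumeration and the per-node accounting loop traced above reduce to the sketch below. This is a reconstruction, not the verbatim setup/hugepages.sh: the 512 seed values are copied from this trace (the real script reads the per-node counts from sysfs), and it reuses the hypothetical get_meminfo_sketch helper from the earlier sketch:

shopt -s extglob nullglob
nodes_sys=() nodes_test=()
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} strips everything through the last "node",
    # leaving the numeric id (.../node0 -> 0, .../node1 -> 1).
    nodes_sys[${node##*node}]=512        # kernel-reported pages per node
    nodes_test[${node##*node}]=512       # per-node count the test expects
done
no_nodes=${#nodes_sys[@]}                # 2 on this host
(( no_nodes > 0 )) || exit 1
resv=0                                   # reserved pages, 0 in this run
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))       # fold reserved pages into the target
    surp=$(get_meminfo_sketch HugePages_Surp "$node")
    (( nodes_test[node] += surp ))       # fold in surplus, 0 on both nodes here
done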
[trace condensed: setup/common.sh@31-32 compare/continue iterations over the node0 meminfo fields, none matching HugePages_Surp until the final entry]
00:13:07.649 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:13:07.649 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:13:07.649 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:13:07.649 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:13:07.649 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:13:07.649 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:13:07.649 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:13:07.649 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:13:07.649 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:13:07.649 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:13:07.649 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:13:07.649 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:13:07.649 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:13:07.649 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:13:07.649 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:13:07.649 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:13:07.649 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:13:07.649 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:13:07.649 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 15683960 kB' 'MemUsed: 11980828 kB' 'SwapCached: 0 kB' 'Active: 6492412 kB' 'Inactive: 3391556 kB' 'Active(anon): 6209676 kB' 'Inactive(anon): 0 kB' 'Active(file): 282736 kB' 'Inactive(file): 3391556 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9723876 kB' 'Mapped: 137844 kB' 'AnonPages: 160176 kB' 'Shmem: 6049584 kB' 'KernelStack: 4808 kB' 'PageTables: 3176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113824 kB' 'Slab: 310996 kB' 'SReclaimable: 113824 kB' 'SUnreclaim: 197172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[trace condensed: setup/common.sh@31-32 compare/continue iterations over the node1 meminfo fields, none matching HugePages_Surp until the final entry]
00:13:07.910 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:13:07.910 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:13:07.910 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:13:07.910 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:13:07.910 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:13:07.911 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:13:07.911 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:13:07.911 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:13:07.911 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:13:07.911 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:13:07.911 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:13:07.911 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:13:07.911 00:13:07.911 real 0m1.471s 00:13:07.911 user 0m0.610s 00:13:07.911 sys 0m0.824s 00:13:07.911 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:07.911 08:39:02 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:13:07.911 ************************************ 00:13:07.911 END TEST per_node_1G_alloc 00:13:07.911 ************************************ 00:13:07.911 08:39:02 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:13:07.911 08:39:02 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:07.911 08:39:02 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:07.911 08:39:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:13:07.911 ************************************ 00:13:07.911 START TEST even_2G_alloc 00:13:07.911 ************************************ 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # even_2G_alloc 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:07.911 
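For reference, the arithmetic the trace just stepped through — a 2097152 kB (2 GiB) request, the default 2048 kB hugepage size, and an even split across two NUMA nodes — reduces to a few lines of bash. A minimal sketch under those assumptions (names mirror the trace; this is an illustrative reconstruction, not the verbatim setup/hugepages.sh):

    #!/usr/bin/env bash
    # Reconstruction of the even_2G_alloc sizing math seen in the trace.
    size_kb=2097152                 # requested size: 2 GiB expressed in kB
    hugepage_kb=2048                # default Hugepagesize from /proc/meminfo
    nr_hugepages=$((size_kb / hugepage_kb))   # 2097152 / 2048 = 1024 pages
    no_nodes=2                      # NUMA nodes on this test rig
    declare -a nodes_test
    for ((node = 0; node < no_nodes; node++)); do
      nodes_test[node]=$((nr_hugepages / no_nodes))   # 512 pages per node
      echo "node${node}=${nodes_test[node]} expecting 512"
    done

Run as-is this prints the same "node0=512 expecting 512" / "node1=512 expecting 512" lines the previous test echoed above.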
08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:13:07.911 08:39:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:13:09.291 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:13:09.291 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:13:09.291 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:13:09.291 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:13:09.291 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:13:09.291 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:13:09.291 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:13:09.291 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:13:09.291 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:13:09.291 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:13:09.291 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:13:09.291 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:13:09.291 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:13:09.291 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:13:09.291 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:13:09.291 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:13:09.291 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:09.291 
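The trace now dumps /proc/meminfo and walks it key by key. Stripped of the xtrace noise, the get_meminfo pattern it exercises is: pick the per-node meminfo file when a node is given, mapfile the contents, drop any "Node N " prefix, then split each line on ': ' and print the value once the requested key matches. A condensed sketch of that pattern (simplified from what the trace shows of setup/common.sh, not the verbatim function):

    #!/usr/bin/env bash
    shopt -s extglob                 # needed for the +([0-9]) prefix strip
    get_meminfo() {
      local get=$1 node=$2 mem_f=/proc/meminfo
      # Per-node queries read that node's own meminfo file when it exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem <"$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line
      local var val _ line
      for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        # The quoted right-hand side forces a literal comparison; xtrace
        # prints it with every character backslash-escaped, which is why
        # the trace shows \H\u\g\e\P\a\g\e\s\_\S\u\r\p rather than a glob.
        if [[ $var == "$get" ]]; then
          echo "$val"
          return 0
        fi
      done
      return 1
    }
    get_meminfo HugePages_Surp       # prints 0 on this rig, per the trace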
08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37655420 kB' 'MemAvailable: 42387948 kB' 'Buffers: 11004 kB' 'Cached: 18362520 kB' 'SwapCached: 0 kB' 'Active: 14362232 kB' 'Inactive: 4489080 kB' 'Active(anon): 13746996 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480508 kB' 'Mapped: 209632 kB' 'Shmem: 13269208 kB' 'KReclaimable: 244100 kB' 'Slab: 618048 kB' 'SReclaimable: 244100 kB' 'SUnreclaim: 373948 kB' 'KernelStack: 13040 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14882856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198672 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB' 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.291 08:39:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.291 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.291 [... the setup/common.sh@31-32 scan repeats in the same pattern for SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk and Percpu, each key failing the AnonHugePages match and hitting continue ...] 08:39:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37655796 kB' 'MemAvailable: 42388324 kB' 'Buffers: 11004 kB' 'Cached: 18362524 kB' 'SwapCached: 0 kB' 'Active: 14362904 kB' 'Inactive: 4489080 kB' 'Active(anon): 13747668 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481324 kB' 'Mapped: 209708 kB' 'Shmem: 13269212 kB' 'KReclaimable: 244100 kB' 'Slab: 618132 kB' 'SReclaimable: 244100 kB' 'SUnreclaim: 374032 kB' 'KernelStack: 13088 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14882624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198608 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB' 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.293 08:39:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:13:09.293 [... the setup/common.sh@31-32 scan repeats in the same pattern for MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd, each key failing the HugePages_Surp match and hitting continue ...] 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37657244 kB' 'MemAvailable: 42389772 kB' 'Buffers: 11004 kB' 'Cached: 18362540 kB' 'SwapCached: 0 kB' 'Active: 14361884 kB' 'Inactive: 4489080 kB' 'Active(anon): 13746648 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480652 kB' 'Mapped: 209632 kB' 'Shmem: 13269228 kB' 'KReclaimable: 244100 kB' 'Slab: 618160 kB' 'SReclaimable: 244100 kB' 'SUnreclaim: 374060 kB' 'KernelStack: 13056 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14882648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198608 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB' 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.295 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.295 [... the setup/common.sh@31-32 scan repeats in the same pattern for Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty and Writeback, each key failing the HugePages_Rsvd match and hitting continue; the excerpt ends while this scan is still in progress ...]
-- setup/common.sh@32 -- # continue 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.296 08:39:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.296 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.297 08:39:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:13:09.297 nr_hugepages=1024 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:13:09.297 resv_hugepages=0 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:13:09.297 surplus_hugepages=0 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:13:09.297 anon_hugepages=0 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:13:09.297 
08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.297 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37656992 kB' 'MemAvailable: 42389520 kB' 'Buffers: 11004 kB' 'Cached: 18362560 kB' 'SwapCached: 0 kB' 'Active: 14362168 kB' 'Inactive: 4489080 kB' 'Active(anon): 13746932 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480924 kB' 'Mapped: 209632 kB' 'Shmem: 13269248 kB' 'KReclaimable: 244100 kB' 'Slab: 618160 kB' 'SReclaimable: 244100 kB' 'SUnreclaim: 374060 kB' 'KernelStack: 13056 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14882668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198608 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.298 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 
08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
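The long runs of [[ key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue above are ordinary bash xtrace output: get_meminfo tests every /proc/meminfo field against the requested key (xtrace escapes each character of the right-hand side because it is an unquoted [[ ]] pattern) and continues until it matches. A minimal sketch of what setup/common.sh's get_meminfo appears to do, reconstructed purely from this trace; the @-tags point back at the trace, and the exact control flow is an assumption, not the canonical SPDK source:

shopt -s extglob

# Reconstruction of setup/common.sh:get_meminfo from the xtrace in this log.
get_meminfo() {
    local get=$1 node=$2 var val
    local mem_f mem
    mem_f=/proc/meminfo                                   # common.sh@22
    # common.sh@23-24: prefer the per-node sysfs file when a node id is given;
    # with $node empty the test probes the bogus path node/node/meminfo and fails,
    # which is exactly what the trace above shows for the global queries
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"                             # common.sh@28
    mem=("${mem[@]#Node +([0-9]) }")                      # common.sh@29: strip "Node N "
    # common.sh@31-33: scan "key: value" pairs; every mismatched key shows up
    # in the log as one [[ ... ]] / continue pair
    while IFS=': ' read -r var val _; do
        [[ $var == $get ]] || continue   # unquoted pattern, hence the \H\u\g\e... escaping
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")                   # common.sh@16
}

get_meminfo HugePages_Total    # prints 1024 on this box
get_meminfo HugePages_Rsvd     # prints 0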
00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21957764 kB' 'MemUsed: 10919176 kB' 'SwapCached: 0 kB' 'Active: 7869060 kB' 'Inactive: 1097524 kB' 'Active(anon): 7536560 kB' 'Inactive(anon): 0 kB' 'Active(file): 332500 kB' 'Inactive(file): 1097524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8649564 kB' 'Mapped: 71788 kB' 'AnonPages: 320208 kB' 'Shmem: 7219540 kB' 'KernelStack: 8248 kB' 'PageTables: 5608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130276 kB' 'Slab: 307108 kB' 'SReclaimable: 130276 kB' 'SUnreclaim: 176832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.299 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
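Stepping back from the per-key noise, the hugepages.sh assertions this trace is exercising reduce to a short chain. The values are the ones the log itself reports (surp and resv at @99-@100, the echoes at @102-@105, the totals at @107-@110); a sketch under those assumptions, reusing the get_meminfo sketch above:

# even_2G_alloc bookkeeping; @-tags refer to setup/hugepages.sh in the trace,
# values are copied from this log, not computed here.
surp=$(get_meminfo HugePages_Surp)    # @99  -> 0
resv=$(get_meminfo HugePages_Rsvd)    # @100 -> 0
nr_hugepages=1024                     # @102, echoed into the log
anon_hugepages=0                      # @105, echoed into the log
((1024 == nr_hugepages + surp + resv))          # @107: requested pool adds up
((1024 == nr_hugepages))                        # @109
((1024 == $(get_meminfo HugePages_Total)))      # @110: kernel agrees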
00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.300 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:13:09.301 08:39:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
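At this point node 0 is settled (HugePages_Surp: 0, folded into nodes_test[0]) and the same probe starts for node 1 via /sys/devices/system/node/node1/meminfo. The surrounding node bookkeeping, reconstructed from hugepages.sh@27-33 and @115-117 in the trace; the array names come from the log, everything else is an assumption:

shopt -s extglob nullglob
declare -a nodes_sys nodes_test   # indexed by NUMA node id (0, 1)

# hugepages.sh@27-33: record the expected per-node page count
get_nodes() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512   # @30: even 512-page split per node
    done
    no_nodes=${#nodes_sys[@]}           # @32: 2 on this machine
    ((no_nodes > 0))                    # @33: sanity check
}

# hugepages.sh@115-117: fold reserved plus per-node surplus pages into the
# expectation; nodes_test is populated elsewhere in hugepages.sh
resv=0    # from @100 above
for node in "${!nodes_test[@]}"; do
    ((nodes_test[node] += resv))                                    # @116
    ((nodes_test[node] += $(get_meminfo HugePages_Surp "$node")))   # @117, 0 here
done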
00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 15699228 kB' 'MemUsed: 11965560 kB' 'SwapCached: 0 kB' 'Active: 6493048 kB' 'Inactive: 3391556 kB' 'Active(anon): 6210312 kB' 'Inactive(anon): 0 kB' 'Active(file): 282736 kB' 'Inactive(file): 3391556 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9724020 kB' 'Mapped: 137844 kB' 'AnonPages: 160684 kB' 'Shmem: 6049728 kB' 'KernelStack: 4792 kB' 'PageTables: 3028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113824 kB' 'Slab: 311052 kB' 'SReclaimable: 113824 kB' 'SUnreclaim: 197228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.301 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.302 
08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
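
The xtrace condensed above is the get_meminfo helper from setup/common.sh doing a linear scan of a meminfo file: pick the per-node sysfs file if a node was given, strip the "Node N " prefix that sysfs adds, then read key/value pairs until the requested key matches and print its value. A minimal standalone sketch, reconstructed from the trace rather than copied from the SPDK tree:

    # get_meminfo sketch, reconstructed from the xtrace above; see
    # setup/common.sh in the SPDK sources for the authoritative version.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # Per-node counters live in sysfs; fall back to the global file.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # sysfs lines start with "Node N "
        while IFS=': ' read -r var val _; do
            # e.g. get_meminfo HugePages_Surp 1 prints "0" on this host
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

The IFS=': ' split is what lets read drop the colon, so a line like "HugePages_Surp: 0" parses as var=HugePages_Surp and val=0, with the trailing "kB" of the larger counters landing in the throwaway _ variable.
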
00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:13:09.302 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:13:09.302 node0=512 expecting 512
00:13:09.303 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:13:09.303 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:13:09.303 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:13:09.303 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:13:09.303 node1=512 expecting 512
00:13:09.303 08:39:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:13:09.303
00:13:09.303 real 0m1.560s
00:13:09.303 user 0m0.662s
00:13:09.303 sys 0m0.867s
00:13:09.303 08:39:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:09.303 08:39:04 
setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:13:09.303 ************************************ 00:13:09.303 END TEST even_2G_alloc 00:13:09.303 ************************************ 00:13:09.303 08:39:04 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:13:09.303 08:39:04 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:09.303 08:39:04 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:09.303 08:39:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:13:09.303 ************************************ 00:13:09.303 START TEST odd_alloc 00:13:09.303 ************************************ 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # odd_alloc 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:13:09.303 08:39:04 setup.sh.hugepages.odd_alloc -- 
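
Before the trace continues into scripts/setup.sh below, note what odd_alloc just computed: HUGEMEM=2049 means 2049 MB, i.e. 2049 * 1024 = 2098176 kB, which at the default 2048 kB page size rounds up to nr_hugepages=1025. The hugepages.sh@81-84 loop then deals those pages out from the highest node down, which is where the traced ": 513" and ": 1" come from. A sketch of that arithmetic, reconstructed from the xtrace (variable names mirror the trace, but treat this as a sketch, not the script itself):

    # Per-node split of an odd hugepage count, as traced above.
    _nr_hugepages=1025 # 2098176 kB / 2048 kB per page, rounded up
    _no_nodes=2
    nodes_test=()
    while ((_no_nodes > 0)); do
        nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes)) # 1025/2=512, then 513/1=513
        : $((_nr_hugepages -= nodes_test[_no_nodes - 1]))        # traced as ": 513", ": 0"
        : $((_no_nodes -= 1))                                    # traced as ": 1",   ": 0"
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}" # node0=513 node1=512

The odd total is the point of the test: 1025 cannot split evenly, so one node must hold 513 pages, and setup.sh (whose output follows) has to honor that per-node request, typically by writing the counts to /sys/devices/system/node/nodeN/hugepages/hugepages-2048kB/nr_hugepages.
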
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:13:10.676 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:13:10.676 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:13:10.676 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:13:10.676 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:13:10.676 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:13:10.676 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:13:10.676 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:13:10.676 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:13:10.676 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:13:10.676 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:13:10.676 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:13:10.676 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:13:10.676 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:13:10.676 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:13:10.676 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:13:10.676 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:13:10.676 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:13:10.940 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:13:10.940 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:13:10.940 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:13:10.940 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:13:10.940 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:13:10.940 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:13:10.940 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:13:10.940 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:10.940 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:10.940 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:10.940 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:13:10.940 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:13:10.940 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:10.940 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:10.940 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:10.940 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:10.940 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:10.940 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:10.940 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.940 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.940 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37651916 kB' 'MemAvailable: 42384440 kB' 'Buffers: 11004 kB' 'Cached: 18362656 kB' 'SwapCached: 0 kB' 'Active: 14362076 
kB' 'Inactive: 4489080 kB' 'Active(anon): 13746840 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480728 kB' 'Mapped: 208652 kB' 'Shmem: 13269344 kB' 'KReclaimable: 244092 kB' 'Slab: 617900 kB' 'SReclaimable: 244092 kB' 'SUnreclaim: 373808 kB' 'KernelStack: 12928 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14863120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198612 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB'
[... xtrace condensed: the setup/common.sh@31-32 read loop steps past every /proc/meminfo key from MemTotal onward until it reaches AnonHugePages ...]
00:13:10.941 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:10.941 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:13:10.941 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:13:10.941 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
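
At this point verify_nr_hugepages has its first number: anon=0, meaning no transparent hugepages are counted against the test (the "[[ always [madvise] never != ..." check earlier had already confirmed THP is not forced to "always"). The trace below repeats the same scan for HugePages_Surp (surp=0) and then HugePages_Rsvd. Reusing the get_meminfo sketch from earlier, the three global reads amount to the following (the zeros match this host's trace; on another machine the values will differ):

    # The verifier's global accounting reads, per the surrounding trace.
    anon=$(get_meminfo AnonHugePages)  # 0 kB -> no THP interference
    surp=$(get_meminfo HugePages_Surp) # 0    -> nothing beyond nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd) # 0    -> no reserved-but-unfaulted pages
    echo "anon=$anon surp=$surp resv=$resv"
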
00:13:10.941 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:13:10.941 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:13:10.941 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:13:10.941 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:13:10.941 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:13:10.941 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:13:10.941 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:13:10.941 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:13:10.941 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:13:10.941 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:13:10.941 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:13:10.941 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:13:10.942 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37656184 kB' 'MemAvailable: 42388708 kB' 'Buffers: 11004 kB' 'Cached: 18362656 kB' 'SwapCached: 0 kB' 'Active: 14362536 kB' 'Inactive: 4489080 kB' 'Active(anon): 13747300 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481188 kB' 'Mapped: 208652 kB' 'Shmem: 13269344 kB' 'KReclaimable: 244092 kB' 'Slab: 617884 kB' 'SReclaimable: 244092 kB' 'SUnreclaim: 373792 kB' 'KernelStack: 12944 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14863136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198564 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB'
[... xtrace condensed: the read loop again steps past every key from MemTotal through HugePages_Rsvd until HugePages_Surp matches ...]
00:13:10.943 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.943 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:13:10.943 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:13:10.943 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:13:10.943 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:13:10.943 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:13:10.943 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:13:10.943 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:13:10.943 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:13:10.943 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:13:10.943 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:13:10.943 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:13:10.943 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:13:10.943 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:13:10.943 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:13:10.943 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:13:10.943 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37657468 kB' 'MemAvailable: 42389992 kB' 'Buffers: 11004 kB' 'Cached: 18362660 kB' 'SwapCached: 0 kB' 'Active: 14358428 kB' 'Inactive: 4489080 kB' 'Active(anon): 13743192 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477580 kB' 'Mapped: 208588 kB' 'Shmem: 13269348 kB' 'KReclaimable: 244092 kB' 'Slab: 617876 kB' 'SReclaimable: 244092 kB' 'SUnreclaim: 373784 kB' 'KernelStack: 12992 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14859580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198560 kB' 
'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB' 00:13:10.943 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.944 08:39:05 
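The xtrace repeats the same pattern for every lookup: pick /proc/meminfo or a per-node meminfo file, strip any "Node N " prefix, then read key/value pairs until the requested key matches and echo its value. Below is a minimal sketch of setup/common.sh's get_meminfo reconstructed from the trace tags; the while/done framing and the failure return are assumptions, since the xtrace only shows the expanded statements.

#!/usr/bin/env bash
# Sketch of get_meminfo reconstructed from the xtrace; @-tags refer to the
# trace's setup/common.sh line markers. Statements the trace does not show
# are assumptions.
shopt -s extglob # needed for the +([0-9]) pattern used below

get_meminfo() {
	local get=$1      # key to look up, e.g. HugePages_Surp   (@17)
	local node=${2:-} # optional NUMA node number             (@18)
	local var val     #                                       (@19)
	local mem_f mem   #                                       (@20)

	mem_f=/proc/meminfo # default source                      (@22)
	# With a node argument the per-node sysfs file is read instead; with
	# node empty this test fails, exactly as the trace shows. (@23-@24)
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f" #                               (@28)
	# Per-node lines carry a "Node N " prefix; strip it.      (@29)
	mem=("${mem[@]#Node +([0-9]) }")

	# Scan "Key: value [kB]" records until the key matches.   (@16, @31-@33)
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo HugePages_Surp   # prints 0 in this run
get_meminfo HugePages_Surp 0 # same lookup against node0's meminfo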
00:13:10.944 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-@32 -- # IFS=': '; read -r var val _; [[ $var == HugePages_Rsvd ]] || continue -- the read loop walks every key of the snapshot above in order, MemTotal through HugePages_Free, hitting continue on every non-matching key
00:13:10.945 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:13:10.945 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:13:10.945 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:13:10.945 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:13:10.945 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:13:10.945 nr_hugepages=1025
00:13:10.945 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:13:10.945 resv_hugepages=0
00:13:10.945 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:13:10.945 surplus_hugepages=0
00:13:10.945 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:13:10.945 anon_hugepages=0
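Read together, the tags hugepages.sh@99-@110 amount to the following bookkeeping, reusing the get_meminfo sketch above. nr_hugepages=1025 is the odd_alloc request; where anon_hugepages comes from is not visible in this excerpt, so reading AnonHugePages for it is an assumption.

# Sketch of the caller's checks; comparisons shown as the trace expands them.
nr_hugepages=1025

surp=$(get_meminfo HugePages_Surp)          # @99  -> 0 in this run
resv=$(get_meminfo HugePages_Rsvd)          # @100 -> 0 in this run
anon_hugepages=$(get_meminfo AnonHugePages) # assumed source of the @105 value

echo "nr_hugepages=$nr_hugepages"           # @102
echo "resv_hugepages=$resv"                 # @103
echo "surplus_hugepages=$surp"              # @104
echo "anon_hugepages=$anon_hugepages"       # @105

# The kernel must report exactly the requested page count, with no surplus
# or reserved pages hiding in the total.
((1025 == nr_hugepages + surp + resv))                           # @107
((1025 == nr_hugepages))                                         # @109
(($(get_meminfo HugePages_Total) == nr_hugepages + surp + resv)) # @110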
00:13:10.945 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:13:10.945 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:13:10.945 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:13:10.945 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-@25 -- # local get=HugePages_Total; local node= -- node is empty again, so mem_f stays /proc/meminfo
00:13:10.945 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28-@29 -- # mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:13:10.946 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37654812 kB' 'MemAvailable: 42387336 kB' 'Buffers: 11004 kB' 'Cached: 18362680 kB' 'SwapCached: 0 kB' 'Active: 14362368 kB' 'Inactive: 4489080 kB' 'Active(anon): 13747132 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481020 kB' 'Mapped: 208588 kB' 'Shmem: 13269368 kB' 'KReclaimable: 244092 kB' 'Slab: 617876 kB' 'SReclaimable: 244092 kB' 'SUnreclaim: 373784 kB' 'KernelStack: 12992 kB' 'PageTables: 8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14863180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198564 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB'
00:13:10.946 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-@32 -- # IFS=': '; read -r var val _; [[ $var == HugePages_Total ]] || continue -- the read loop walks every key of the snapshot above in order, MemTotal through Unaccepted, hitting continue on every non-matching key
00:13:10.947 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:13:10.947 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:13:10.947 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:13:10.947 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:13:10.947 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:13:10.947 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:13:10.947 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29-@30 -- # for node in /sys/devices/system/node/node+([0-9]) -- two iterations: nodes_sys[0]=512, nodes_sys[1]=513
00:13:10.947 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:13:10.947 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:13:10.947 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:13:10.947 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:13:10.947 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:13:10.947 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-@25 -- # local get=HugePages_Surp; local node=0 -- /sys/devices/system/node/node0/meminfo exists, so mem_f=/sys/devices/system/node/node0/meminfo
00:13:10.947 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28-@29 -- # mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:13:10.948 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21965100 kB' 'MemUsed: 10911840 kB' 'SwapCached: 0 kB' 'Active: 7864720 kB' 'Inactive: 1097524 kB' 'Active(anon): 7532220 kB' 'Inactive(anon): 0 kB' 'Active(file): 332500 kB' 'Inactive(file): 1097524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8649612 kB' 'Mapped: 70840 kB' 'AnonPages: 315860 kB' 'Shmem: 7219588 kB' 'KernelStack: 8136 kB' 'PageTables: 4988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130276 kB' 'Slab: 306872 kB' 'SReclaimable: 130276 kB' 'SUnreclaim: 176596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:13:10.948 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-@32 -- # IFS=': '; read -r var val _; [[ $var == HugePages_Surp ]] || continue -- the read loop walks node0's keys in order, MemTotal through Unaccepted, hitting continue on every non-matching key 00:13:10.949 08:39:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 15689964 kB' 'MemUsed: 11974824 kB' 'SwapCached: 0 kB' 'Active: 6497212 kB' 'Inactive: 3391556 kB' 'Active(anon): 6214476 kB' 'Inactive(anon): 0 kB' 'Active(file): 282736 kB' 'Inactive(file): 3391556 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9724108 kB' 'Mapped: 137748 kB' 'AnonPages: 164780 kB' 'Shmem: 6049816 kB' 'KernelStack: 4840 kB' 'PageTables: 3172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113816 kB' 'Slab: 311004 kB' 'SReclaimable: 113816 kB' 'SUnreclaim: 197188 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 
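The node1 dump just printed comes from get_meminfo: when a node argument is given it reads /sys/devices/system/node/node1/meminfo instead of /proc/meminfo, strips the leading "Node 1 " prefix from every line with an extglob substitution, and then scans "key: value" pairs until the requested field (HugePages_Surp here) matches. A minimal sketch of that pattern, under a hypothetical helper name (the real logic lives in setup/common.sh):

get_meminfo_sketch() {   # hypothetical name; mirrors the xtrace above
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    shopt -s extglob
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
# e.g. get_meminfo_sketch HugePages_Surp 1  ->  0 for the node1 dump above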
00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.949 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
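Both scans in this stretch hunt a single field, HugePages_Surp, in each node's meminfo: setup/hugepages.sh folds the surplus count into nodes_test[node] at @117, alongside the reserved count added at @116, before comparing against the expected split. Surplus pages only exist when vm.nr_overcommit_hugepages lets the pool grow past nr_hugepages; a hedged sketch of reading them straight from sysfs (standard kernel paths, with the 2048kB page size taken from the Hugepagesize in this log):

for n in /sys/devices/system/node/node*; do
    surp=$(cat "$n/hugepages/hugepages-2048kB/surplus_hugepages")
    echo "${n##*/}: surplus=${surp}"   # 0 on both nodes in this run
done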
00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:13:10.950 node0=512 expecting 513 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:13:10.950 node1=513 expecting 512 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:13:10.950 00:13:10.950 real 0m1.643s 00:13:10.950 user 0m0.719s 00:13:10.950 sys 0m0.895s 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:10.950 08:39:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:13:10.950 ************************************ 00:13:10.950 END TEST odd_alloc 00:13:10.950 ************************************ 00:13:11.209 08:39:05 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:13:11.209 08:39:05 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:11.209 08:39:05 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:11.209 08:39:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:13:11.209 ************************************ 00:13:11.209 START TEST custom_alloc 00:13:11.209 ************************************ 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # custom_alloc 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # 
(( size >= default_hugepages )) 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:13:11.209 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:13:11.210 08:39:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:13:12.590 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:13:12.590 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:13:12.590 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:13:12.590 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:13:12.590 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:13:12.590 0000:00:04.2 (8086 
0e22): Already using the vfio-pci driver 00:13:12.590 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:13:12.590 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:13:12.590 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:13:12.590 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:13:12.590 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:13:12.590 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:13:12.590 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:13:12.590 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:13:12.590 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:13:12.590 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:13:12.590 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36623732 kB' 'MemAvailable: 41356244 kB' 'Buffers: 11004 kB' 'Cached: 18362788 kB' 'SwapCached: 0 kB' 'Active: 14357588 kB' 'Inactive: 4489080 kB' 'Active(anon): 13742352 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475752 kB' 'Mapped: 208664 
kB' 'Shmem: 13269476 kB' 'KReclaimable: 244068 kB' 'Slab: 617668 kB' 'SReclaimable: 244068 kB' 'SUnreclaim: 373600 kB' 'KernelStack: 12976 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14857264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198608 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB' 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.590 
08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.590 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
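This scan is the anonymous-hugepages half of verify_nr_hugepages: the guard a few lines earlier ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]) tested the contents of /sys/kernel/mm/transparent_hugepage/enabled, and since THP is not hard-disabled the script reads the system-wide AnonHugePages counter so transparent huge pages are not confused with the explicit pool (it lands as anon=0 below). A minimal sketch of that check, assuming awk for the field extraction rather than the script's read loop:

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP can hand out anonymous huge pages, so the counter is worth reading
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon=${anon} kB"   # matches the anon=0 recorded in this run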
00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.591 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 
-- # local node= 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36627708 kB' 'MemAvailable: 41360220 kB' 'Buffers: 11004 kB' 'Cached: 18362788 kB' 'SwapCached: 0 kB' 'Active: 14357400 kB' 'Inactive: 4489080 kB' 'Active(anon): 13742164 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476008 kB' 'Mapped: 208664 kB' 'Shmem: 13269476 kB' 'KReclaimable: 244068 kB' 'Slab: 617652 kB' 'SReclaimable: 244068 kB' 'SUnreclaim: 373584 kB' 'KernelStack: 12992 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14857280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198560 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.592 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 
08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:13:12.593 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.594 08:39:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36628328 kB' 'MemAvailable: 41360840 kB' 'Buffers: 11004 kB' 'Cached: 18362792 kB' 'SwapCached: 0 kB' 'Active: 14357112 kB' 'Inactive: 4489080 kB' 'Active(anon): 13741876 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475664 kB' 'Mapped: 208600 kB' 'Shmem: 13269480 kB' 'KReclaimable: 244068 kB' 'Slab: 617676 kB' 'SReclaimable: 244068 kB' 'SUnreclaim: 373608 kB' 'KernelStack: 13008 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14857304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198560 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.594 08:39:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.594 08:39:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.594 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 
08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.595 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:13:12.596 nr_hugepages=1536 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:13:12.596 resv_hugepages=0 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:13:12.596 surplus_hugepages=0 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:13:12.596 anon_hugepages=0 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36631192 kB' 'MemAvailable: 41363704 kB' 'Buffers: 11004 kB' 'Cached: 18362808 kB' 'SwapCached: 0 kB' 'Active: 14352536 kB' 'Inactive: 4489080 kB' 'Active(anon): 13737300 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471496 kB' 'Mapped: 208164 kB' 'Shmem: 
13269496 kB' 'KReclaimable: 244068 kB' 'Slab: 617676 kB' 'SReclaimable: 244068 kB' 'SUnreclaim: 373608 kB' 'KernelStack: 12992 kB' 'PageTables: 8192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14853880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198556 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB' 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
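The long runs of `[[ <field> == \H\u\g\e\P\a\g\e\s\_... ]]` / `continue` pairs above are bash xtrace output from the get_meminfo helper in setup/common.sh: each lookup reads /proc/meminfo (or a per-node meminfo file) with `IFS=': '` and skips every field until it reaches the requested one, which is why each call replays the whole file. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the script (the real helper mapfiles the whole file and also strips a leading "Node <n> " prefix from per-node files, both omitted here):

    # Simplified reconstruction of the get_meminfo lookup seen in this trace.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument, read the per-node file instead; with an empty
        # node the /sys/devices/system/node/node/meminfo test above fails and
        # the lookup falls back to /proc/meminfo.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            # Every non-matching field produces one [[ ... ]]/continue pair
            # in the xtrace; only the requested field reaches the echo.
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done < "$mem_f"
        echo 0
    }

Called as `get_meminfo_sketch HugePages_Surp` on this box it prints 0, matching the `surp=0` recorded at hugepages.sh@99 above.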
00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:12.596 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:12.596 08:39:07 
[... setup/common.sh@31-32 xtrace read loop elided: remaining /proc/meminfo keys (Zswap .. Unaccepted) read and skipped while scanning for HugePages_Total ...]
00:13:12.597 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:13:12.597 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:13:12.597 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:13:12.597 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:13:12.597 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:13:12.597 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:13:12.597 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
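The elided scan above is setup/common.sh's get_meminfo helper walking every key of a meminfo file until it reaches the requested one (HugePages_Total here, printing 1536). A minimal bash sketch of that helper, reconstructed from the traced commands -- the body follows the trace line by line, but it is an approximation, not the actual SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob    # the +([0-9]) pattern below needs extended globbing

    # Print the value of one meminfo key, optionally for a single NUMA node.
    # Reconstructed from the xtrace (setup/common.sh@17-33); treat as a sketch.
    get_meminfo() {
        local get=$1 node=$2
        local var val rest
        local mem_f=/proc/meminfo
        local -a mem
        # With a node index, prefer the per-node sysfs file (common.sh@23-24)
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"             # common.sh@28
        mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node N " prefix (common.sh@29)
        local line
        for line in "${mem[@]}"; do
            # "HugePages_Total:    1536" splits into var=HugePages_Total, val=1536
            IFS=': ' read -r var val rest <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Total     # prints 1536 on this box
    get_meminfo HugePages_Surp 0    # prints 0, matching the node0 scan below

The get_nodes loop the trace has just entered resumes below, recording 512 pages for node0 and 1024 for node1 in nodes_sys.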
00:13:12.598 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:13:12.598 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:13:12.598 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:13:12.598 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:13:12.598 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:13:12.598 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:13:12.598 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:13:12.598 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:13:12.598 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:13:12.598 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:13:12.598 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:13:12.598 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:13:12.598 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:13:12.598 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:13:12.598 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:13:12.598 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:13:12.598 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:13:12.598 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:13:12.598 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:13:12.598 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21982316 kB' 'MemUsed: 10894624 kB' 'SwapCached: 0 kB' 'Active: 7865452 kB' 'Inactive: 1097524 kB' 'Active(anon): 7532952 kB' 'Inactive(anon): 0 kB' 'Active(file): 332500 kB' 'Inactive(file): 1097524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8649700 kB' 'Mapped: 70852 kB' 'AnonPages: 316500 kB' 'Shmem: 7219676 kB' 'KernelStack: 8152 kB' 'PageTables: 5036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130252 kB' 'Slab: 306784 kB' 'SReclaimable: 130252 kB' 'SUnreclaim: 176532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@31-32 xtrace read loop elided: node0 meminfo keys read and skipped while scanning for HugePages_Surp ...]
00:13:12.599 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:13:12.599 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:13:12.599 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:13:12.599 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:13:12.599 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:13:12.599 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:13:12.599 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:13:12.599 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:13:12.599 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:13:12.599 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:13:12.599 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:13:12.599 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:13:12.599 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:13:12.599 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:13:12.599 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:13:12.858 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:13:12.858 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:13:12.858 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:13:12.858 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 14648296 kB' 'MemUsed: 13016492 kB' 'SwapCached: 0 kB' 'Active: 6491088 kB' 'Inactive: 3391556 kB' 'Active(anon): 6208352 kB' 'Inactive(anon): 0 kB' 'Active(file): 282736 kB' 'Inactive(file): 3391556 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9724112 kB' 'Mapped: 137748 kB' 'AnonPages: 158616 kB' 'Shmem: 6049820 kB' 'KernelStack: 4824 kB' 'PageTables: 3108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113816 kB' 'Slab: 310892 kB' 'SReclaimable: 113816 kB' 'SUnreclaim: 197076 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... setup/common.sh@31-32 xtrace read loop elided: node1 meminfo keys read and skipped while scanning for HugePages_Surp ...]
00:13:12.859 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:13:12.859 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:13:12.859 08:39:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:13:12.859 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:13:12.859 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:13:12.859 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:13:12.859 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:13:12.859 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:13:12.859 node0=512 expecting 512
00:13:12.860 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:13:12.860 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:13:12.860 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:13:12.860 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:13:12.860 node1=1024 expecting 1024
00:13:12.860 08:39:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:13:12.860
00:13:12.860 real 0m1.631s
00:13:12.860 user 0m0.653s
00:13:12.860 sys 0m0.948s
00:13:12.860 08:39:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable
00:13:12.860 08:39:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:13:12.860 ************************************
00:13:12.860 END TEST custom_alloc
00:13:12.860 ************************************
00:13:12.860 08:39:07 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:13:12.860 08:39:07 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:13:12.860 08:39:07 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable
00:13:12.860 08:39:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:13:12.860 ************************************
00:13:12.860 START TEST no_shrink_alloc
00:13:12.860 ************************************
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # no_shrink_alloc
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
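The custom_alloc verification above closes by comparing, per node, what the test expects (nodes_test, topped up with resv plus each node's HugePages_Surp, both 0 here) against what the kernel reports (nodes_sys: 512 and 1024). The traced @126-@127 lines use array subscripts as a free sort: assigning sorted_t[value]=1 makes ${!sorted_t[*]} expand the values in ascending order. A sketch of that idiom, reusing the get_meminfo sketch from earlier; the IFS=',' join in the final test is an assumption on my part, since the trace only shows the already-expanded form [[ 512,1024 == 512,1024 ]]:

    # Per-node hugepage verification (setup/hugepages.sh@115-130), reconstructed.
    declare -a nodes_sys=([0]=512 [1]=1024)    # from the per-node sysfs meminfo
    declare -a nodes_test=([0]=512 [1]=1024)   # what the test allocated
    declare -a sorted_t=() sorted_s=()
    resv=0

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        # Surplus pages per node; 0 for both nodes in this run
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    done

    for node in "${!nodes_test[@]}"; do
        # Indexed-array subscripts expand in ascending order, so using the
        # value as the subscript yields a sorted, de-duplicated set of counts.
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done

    same_counts() {
        local IFS=,    # join the subscript lists with commas, as in the trace
        [[ "${!sorted_s[*]}" == "${!sorted_t[*]}" ]]
    }
    same_counts && echo "per-node hugepage split verified"

The no_shrink_alloc test that starts next asks get_test_nr_hugepages for 2097152 kB; at the 2048 kB Hugepagesize reported further down, that is consistent with the nr_hugepages=1024 in the trace, and node_ids=('0') pins the whole allocation to node 0 -- the per-node assignment the trace is stepping through resumes below.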
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:13:12.860 08:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:13:14.239 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:13:14.239 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:13:14.239 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:13:14.239 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:13:14.239 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:13:14.239 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:13:14.239 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:13:14.239 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:13:14.239 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:13:14.239 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:13:14.239 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:13:14.239 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:13:14.239 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:13:14.239 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:13:14.239 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:13:14.239 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:13:14.239 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:13:14.239 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:13:14.239 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:13:14.239 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:13:14.239 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:13:14.239 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:13:14.239 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:13:14.239 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:13:14.239 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:13:14.239 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:13:14.239 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:13:14.239 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:13:14.239 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:13:14.239 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:13:14.239 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:13:14.239 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:13:14.239 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:13:14.239 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:13:14.240 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:13:14.240 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:13:14.240 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:13:14.240 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37602864 kB' 'MemAvailable: 42335376 kB' 'Buffers: 11004 kB' 'Cached: 18362916 kB' 'SwapCached: 0 kB' 'Active: 14355932 kB' 'Inactive: 4489080 kB' 'Active(anon): 13740696 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474636 kB' 'Mapped: 208196 kB' 'Shmem: 13269604 kB' 'KReclaimable: 244068 kB' 'Slab: 617812 kB' 'SReclaimable: 244068 kB' 'SUnreclaim: 373744 kB' 'KernelStack: 12992 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14856872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198572 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB'
[... setup/common.sh@31-32 xtrace read loop: /proc/meminfo keys read and skipped while scanning for AnonHugePages ...]
continue 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37603280 kB' 'MemAvailable: 42335792 kB' 'Buffers: 11004 kB' 'Cached: 18362920 kB' 'SwapCached: 0 kB' 'Active: 14358048 kB' 'Inactive: 4489080 kB' 'Active(anon): 13742812 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 
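The loop traced above is common.sh's get_meminfo walking the snapshot one "key: value" pair at a time: every non-matching key hits the continue at @32, and the first match echoes its value and returns (here AnonHugePages yields 0). A minimal Bash sketch of that pattern, reconstructed from the xtrace rather than copied from SPDK's setup/common.sh:

    # Minimal sketch of the parse loop traced above (reconstructed from the
    # xtrace, not SPDK's verbatim setup/common.sh).
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # bash -x prints the right-hand side as \A\n\o\n\H\u\g\e\P\a\g\e\s
            # because a quoted operand is compared literally, not as a glob
            [[ $var == "$get" ]] || continue
            echo "$val"    # e.g. 0 for AnonHugePages; the "kB" unit lands in $_
            return 0
        done < /proc/meminfo
        return 1
    }

    # usage mirroring hugepages.sh@97: anon=$(get_meminfo_sketch AnonHugePages)

The @33 echo/return pair is why the surrounding command substitution collapses to a bare anon=0 in the trace.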
00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:13:14.241 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37603280 kB' 'MemAvailable: 42335792 kB' 'Buffers: 11004 kB' 'Cached: 18362920 kB' 'SwapCached: 0 kB' 'Active: 14358048 kB' 'Inactive: 4489080 kB' 'Active(anon): 13742812 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476476 kB' 'Mapped: 208764 kB' 'Shmem: 13269608 kB' 'KReclaimable: 244068 kB' 'Slab: 617848 kB' 'SReclaimable: 244068 kB' 'SUnreclaim: 373780 kB' 'KernelStack: 13072 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14858356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198608 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB'
[xtrace condensed: the read loop stepped through the snapshot above from MemTotal through HugePages_Rsvd, hitting "continue" at common.sh@32 on every key that was not HugePages_Surp]
00:13:14.243 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:13:14.243 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:13:14.243 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:13:14.243 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
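The repeated preamble (local node=, mem_f=/proc/meminfo, the existence test at common.sh@23) is a per-NUMA-node switch: given a node number, the same parser would be pointed at that node's sysfs meminfo, whose lines carry a "Node N " prefix that the extglob strip at common.sh@29 removes. With node empty the sysfs path degenerates and the test fails, which is exactly the /proc/meminfo fallback this run shows. A sketch of that selection, reconstructed from the trace; pick_meminfo is a hypothetical name:

    # Sketch of the meminfo-source selection traced in the preamble above
    # (reconstructed from the xtrace; pick_meminfo is a hypothetical helper).
    shopt -s extglob
    pick_meminfo() {
        local node=$1 mem_f=/proc/meminfo mem
        # with $node empty this becomes .../node/node/meminfo and fails,
        # matching the fallback seen at common.sh@23 in the trace
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node lines look like "Node 0 AnonHugePages: 0 kB"; the extglob
        # strip seen at common.sh@29 normalizes both sources to one shape
        mem=("${mem[@]#Node +([0-9]) }")
        printf '%s\n' "${mem[@]}"
    }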
00:13:14.243 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:13:14.243 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:13:14.243 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:13:14.243 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:13:14.243 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:13:14.243 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:13:14.243 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:13:14.243 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:13:14.243 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:13:14.243 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:13:14.243 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:13:14.243 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:13:14.243 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37604852 kB' 'MemAvailable: 42337364 kB' 'Buffers: 11004 kB' 'Cached: 18362936 kB' 'SwapCached: 0 kB' 'Active: 14352428 kB' 'Inactive: 4489080 kB' 'Active(anon): 13737192 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470908 kB' 'Mapped: 207740 kB' 'Shmem: 13269624 kB' 'KReclaimable: 244068 kB' 'Slab: 617808 kB' 'SReclaimable: 244068 kB' 'SUnreclaim: 373740 kB' 'KernelStack: 12960 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14853640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198588 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB'
[xtrace condensed: the read loop stepped through the snapshot above from MemTotal through HugePages_Free, hitting "continue" at common.sh@32 on every key that was not HugePages_Rsvd]
00:13:14.246 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:13:14.246 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:13:14.246 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:13:14.246 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:13:14.246 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:13:14.246 nr_hugepages=1024
00:13:14.246 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:13:14.246 resv_hugepages=0
00:13:14.246 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:13:14.246 surplus_hugepages=0
00:13:14.246 08:39:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:13:14.246 anon_hugepages=0
00:13:14.246 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:13:14.246 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
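With anon=0, surp=0 and resv=0 collected, hugepages.sh@107 and @109 assert the pool accounting: HugePages_Total (1024) must equal nr_hugepages plus surplus and reserved pages. The snapshots are also internally consistent, since 1024 pages of 2048 kB gives exactly the reported Hugetlb of 2097152 kB (2 GiB). A standalone sketch of both checks against a live /proc/meminfo, assuming a single hugepage size so Hugetlb covers only this pool:

    # Standalone sketch of the invariants asserted at hugepages.sh@107/@109,
    # using live /proc/meminfo values (in this run: 1024 pages, all free).
    nr_hugepages=1024
    total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp/ {print $2}' /proc/meminfo)
    pagesize_kb=$(awk '/^Hugepagesize/ {print $2}' /proc/meminfo)
    hugetlb_kb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)

    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'
    # pool geometry: 1024 * 2048 kB = 2097152 kB (2 GiB), matching the
    # Hugetlb figure in all three snapshots above (single-size pool assumed)
    (( hugetlb_kb == total * pagesize_kb )) || echo 'hugepage geometry mismatch'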
244068 kB' 'Slab: 617808 kB' 'SReclaimable: 244068 kB' 'SUnreclaim: 373740 kB' 'KernelStack: 12960 kB' 'PageTables: 8096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14855352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198556 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB' 00:13:14.246 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.246 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.246 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.246 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.246 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.246 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.246 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.246 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.246 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.246 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.246 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.246 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.246 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.246 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.246 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.246 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 
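The scan that just finished resolved HugePages_Rsvd to 0, and the bookkeeping echoed above (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feeds the guard at hugepages.sh@107 before a fresh scan re-reads HugePages_Total. A self-contained sketch of the same scan-and-check, assuming a standard /proc/meminfo layout; this is a simplification of common.sh, not the script verbatim:

get_meminfo() {
    # common.sh's technique: split on ':' and spaces, skip every key until
    # the requested one matches, then emit its value.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
}

total=$(get_meminfo HugePages_Total)   # 1024 in this run
surp=$(get_meminfo HugePages_Surp)     # 0
resv=$(get_meminfo HugePages_Rsvd)     # 0
# Same invariant as hugepages.sh@107: the pool must account for the
# requested pages plus any surplus and reserved ones.
(( total == 1024 + surp + resv )) && echo "hugepage accounting consistent"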
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.247 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.248 08:39:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.248 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:14.248 08:39:09 
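While the HugePages_Total scan grinds through the remaining keys, the snapshot it is reading can be cross-checked by hand: 1024 pages of Hugepagesize 2048 kB should equal the snapshot's Hugetlb figure, and they do. A one-line check of that arithmetic:

pages=1024; page_kb=2048                 # HugePages_Total, Hugepagesize above
(( pages * page_kb == 2097152 )) \
    && echo "Hugetlb matches: $((pages * page_kb)) kB"   # 2097152 kB = 2 GiB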
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20880968 kB' 'MemUsed: 11995972 kB' 'SwapCached: 0 kB' 'Active: 7872020 kB' 'Inactive: 1097524 kB' 'Active(anon): 7539520 kB' 'Inactive(anon): 0 kB' 'Active(file): 332500 kB' 'Inactive(file): 1097524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8649792 kB' 'Mapped: 70876 kB' 'AnonPages: 322992 kB' 'Shmem: 7219768 kB' 'KernelStack: 8168 kB' 'PageTables: 5116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130252 kB' 'Slab: 306832 kB' 'SReclaimable: 130252 kB' 'SUnreclaim: 176580 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.508 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.509 08:39:09 
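The pass running here is the first per-node one: get_meminfo was called with node 0, so the probe at common.sh@23 succeeded and mem_f switched from /proc/meminfo to the sysfs per-node file. A sketch of that branch, assuming the sysfs file exists; illustrative, not the script itself:

node=0
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
    mem_f=/sys/devices/system/node/node${node}/meminfo
fi
# Per-node lines read "Node 0 HugePages_Surp: 0"; discarding the two
# prefix fields leaves the same key/value shape the system-wide scan uses.
while IFS=': ' read -r _ _ var val _; do
    [[ $var == HugePages_Surp ]] && { echo "node${node} surplus: $val"; break; }
done < "$mem_f"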
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:13:14.509 node0=1024 expecting 1024 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:13:14.509 08:39:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:13:15.894 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:13:15.894 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:13:15.894 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:13:15.894 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:13:15.894 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:13:15.894 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:13:15.894 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:13:15.894 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:13:15.894 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:13:15.894 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:13:15.894 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:13:15.894 0000:80:04.5 (8086 0e25): Already using 
the vfio-pci driver 00:13:15.894 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:13:15.894 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:13:15.894 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:13:15.894 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:13:15.894 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:13:15.894 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:13:15.894 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:13:15.894 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:13:15.894 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:13:15.894 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:13:15.894 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:13:15.894 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:13:15.894 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:13:15.894 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:15.894 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:15.894 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:15.894 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:13:15.894 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:13:15.894 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:15.894 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:15.894 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:15.894 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:15.894 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37590336 kB' 'MemAvailable: 42322856 kB' 'Buffers: 11004 kB' 'Cached: 18363032 kB' 'SwapCached: 0 kB' 'Active: 14350912 kB' 'Inactive: 4489080 kB' 'Active(anon): 13735676 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469384 kB' 'Mapped: 207764 kB' 'Shmem: 13269720 kB' 'KReclaimable: 244084 kB' 'Slab: 617816 kB' 'SReclaimable: 244084 kB' 'SUnreclaim: 373732 kB' 'KernelStack: 12912 kB' 'PageTables: 7924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14848968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198476 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 
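The INFO line above is the heart of the no_shrink_alloc case: with node0 verified at 1024 pages, the test re-runs setup.sh asking for fewer pages without clearing the pool, and the allocator must keep the larger existing reservation rather than shrink it. A hedged sketch of that invocation, using the variables and script path shown in the trace:

CLEAR_HUGE=no NRHUGE=512 \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
# Expected: "INFO: Requested 512 hugepages but 1024 already allocated on node0"
# verify_nr_hugepages (running below) then re-reads /proc/meminfo to
# confirm the original 1024 pages survived the smaller request.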
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 
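Before this AnonHugePages scan started, verify_nr_hugepages applied the gate at hugepages.sh@96: the kernel reported 'always [madvise] never', which does not contain '[never]', so anonymous hugepage usage is worth sampling. A standalone rendering of the same test (a sketch; the bracketed token marks the active THP mode):

thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" here
if [[ $thp != *"[never]"* ]]; then
    # THP may be in use, so AnonHugePages can be nonzero; sample it.
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon=0   # THP disabled: nothing to count
fi
echo "anon_hugepages=${anon}"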
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.895 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Surp 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37592952 kB' 'MemAvailable: 42325472 kB' 'Buffers: 11004 kB' 'Cached: 18363036 kB' 'SwapCached: 0 kB' 'Active: 14350720 kB' 'Inactive: 4489080 kB' 'Active(anon): 13735484 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469204 kB' 'Mapped: 207700 kB' 'Shmem: 13269724 kB' 'KReclaimable: 244084 kB' 'Slab: 617784 kB' 'SReclaimable: 244084 kB' 'SUnreclaim: 373700 kB' 'KernelStack: 12928 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14848988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198428 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.896 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.897 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 
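The trace above has just resolved HugePages_Surp to 0 (captured as surp=0 at hugepages.sh@99) and is starting the identical scan for HugePages_Rsvd. The pattern is setup/common.sh's get_meminfo: each /proc/meminfo line is split with IFS=': ' read -r var val _, non-matching keys fall through the continue branch at common.sh@32, and the matching key's value is echoed at common.sh@33. A minimal standalone sketch of that lookup pattern, assuming a hypothetical helper name get_meminfo_value rather than the actual SPDK function (which buffers the file with mapfile and strips per-node prefixes with an extglob expansion, as the trace shows):

    shopt -s extglob    # needed for the "Node N " prefix strip below
    get_meminfo_value() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        # Fall back to the per-node file when a node is given, mirroring the
        # [[ -e /sys/devices/system/node/node$node/meminfo ]] test in the trace.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }        # per-node lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue   # the skip branch seen at common.sh@32
            echo "$val"                        # value only; a trailing "kB" lands in $_
            return 0
        done <"$mem_f"
        return 1
    }

Calling get_meminfo_value HugePages_Surp against the snapshot printed above would yield 0, matching the surp=0 result in the trace.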
00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37593676 kB' 'MemAvailable: 42326196 kB' 'Buffers: 11004 kB' 'Cached: 18363052 kB' 'SwapCached: 0 kB' 'Active: 14350600 kB' 'Inactive: 4489080 kB' 'Active(anon): 13735364 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469032 kB' 'Mapped: 207700 kB' 'Shmem: 13269740 kB' 'KReclaimable: 244084 kB' 'Slab: 617864 kB' 'SReclaimable: 244084 kB' 'SUnreclaim: 373780 kB' 'KernelStack: 12928 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14849008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198428 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.898 08:39:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.898 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 
08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.899 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.900 08:39:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.900 08:39:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:13:15.900 nr_hugepages=1024 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:13:15.900 resv_hugepages=0 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:13:15.900 surplus_hugepages=0 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:13:15.900 anon_hugepages=0 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:13:15.900 08:39:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37593676 kB' 'MemAvailable: 42326196 kB' 'Buffers: 11004 kB' 'Cached: 18363076 kB' 'SwapCached: 0 kB' 'Active: 14350436 kB' 'Inactive: 4489080 kB' 'Active(anon): 13735200 kB' 'Inactive(anon): 0 kB' 'Active(file): 615236 kB' 'Inactive(file): 4489080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468872 kB' 'Mapped: 207700 kB' 'Shmem: 13269764 kB' 'KReclaimable: 244084 kB' 'Slab: 617864 kB' 'SReclaimable: 244084 kB' 'SUnreclaim: 373780 kB' 'KernelStack: 12928 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14849032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198428 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1734236 kB' 'DirectMap2M: 20205568 kB' 'DirectMap1G: 47185920 kB' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
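At hugepages.sh@102-110 above, the script reports its derived counters (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), sanity-checks them with the arithmetic tests at @107 and @109, and then issues one more get_meminfo HugePages_Total call, whose snapshot and scan follow. A hedged sketch of that accounting check, reconstructed from the already-expanded arithmetic in the trace and reusing the hypothetical get_meminfo_value helper above (variable names beyond nr_hugepages, surp, and resv are illustrative):

    # The pool is consistent when the kernel-reported total equals the
    # requested page count plus any surplus and reserved pages (all 0 here).
    nr_hugepages=1024
    surp=$(get_meminfo_value HugePages_Surp)    # 0 in the trace above
    resv=$(get_meminfo_value HugePages_Rsvd)    # 0
    total=$(get_meminfo_value HugePages_Total)  # 1024
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
    (( total == nr_hugepages )) && echo "nr_hugepages=$nr_hugepages"

With the values in this run both tests pass, which is why the trace proceeds straight into the HugePages_Total lookup without any shrink or reallocation step.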
00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.900 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.901 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.902 08:39:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:15.902 08:39:10 
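The scans above and below trace setup/common.sh's get_meminfo helper: it reads /proc/meminfo (or a per-node meminfo under /sys/devices/system/node) with IFS=': ', compares each key against the requested one, and echoes the matching value. A minimal standalone sketch of the same pattern, assuming a sed-based strip of the per-node "Node N " prefix in place of the extglob expansion the mapfile line above shows (the function name is illustrative):

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node stats live under /sys/devices/system/node/nodeN/meminfo.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        # Echo the value for the requested key, e.g. 1024 for HugePages_Total.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")  # strip the per-node prefix
    return 1
}
# Usage matching the trace: get_meminfo_sketch HugePages_Total
#                           get_meminfo_sketch HugePages_Surp 0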
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20881044 kB' 'MemUsed: 11995896 kB' 'SwapCached: 0 kB' 'Active: 7865916 kB' 'Inactive: 1097524 kB' 'Active(anon): 7533416 kB' 'Inactive(anon): 0 kB' 'Active(file): 332500 kB' 'Inactive(file): 1097524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8649816 kB' 'Mapped: 70868 kB' 'AnonPages: 316976 kB' 'Shmem: 7219792 kB' 'KernelStack: 8152 kB' 'PageTables: 5016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130268 kB' 'Slab: 306860 kB' 'SReclaimable: 130268 kB' 'SUnreclaim: 176592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.902 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.903 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.904 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:13:15.904 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:13:15.904 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:13:15.904 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:15.904 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:13:15.904 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:13:15.904 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:15.904 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:15.904 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:15.904 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:15.904 08:39:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:13:15.904 node0=1024 expecting 1024 00:13:15.904 08:39:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:13:15.904 00:13:15.904 real 0m3.172s 00:13:15.904 user 0m1.284s 00:13:15.904 sys 0m1.821s 00:13:15.904 08:39:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:15.904 08:39:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:13:15.904 ************************************ 00:13:15.904 END TEST no_shrink_alloc 00:13:15.904 ************************************ 00:13:15.904 08:39:10 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:13:15.904 08:39:10 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:13:15.904 08:39:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:13:15.904 08:39:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:13:15.904 08:39:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:13:15.904 08:39:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:13:15.904 08:39:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:13:15.904 08:39:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:13:15.904 08:39:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:13:15.904 08:39:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:13:15.904 08:39:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:13:15.904 08:39:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:13:15.904 08:39:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:13:15.904 08:39:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:13:15.904 00:13:15.904 real 0m12.527s 00:13:15.904 user 0m4.764s 00:13:15.904 sys 0m6.614s 00:13:15.904 08:39:10 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:15.904 08:39:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:13:15.904 ************************************ 00:13:15.904 END TEST hugepages 00:13:15.904 ************************************ 00:13:16.162 08:39:10 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:13:16.162 08:39:10 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:16.162 08:39:10 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:16.162 08:39:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:13:16.162 ************************************ 00:13:16.162 START TEST driver 00:13:16.162 ************************************ 00:13:16.162 08:39:10 setup.sh.driver -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:13:16.162 * Looking for test storage... 
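Before the driver suite starts, the clear_hp pass above released every per-node hugepage pool; the bare 'echo 0' statements at hugepages.sh@41 are redirect-only, and xtrace does not display redirections, but writing 0 into each nr_hugepages file is the standard mechanism, so a hedged sketch of that reset (run as root) looks like:

for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
    echo 0 > "$hp/nr_hugepages"   # release this node's pool for this page size
done
export CLEAR_HUGE=yes             # as exported by hugepages.sh@45 above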
00:13:16.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:13:16.162 08:39:10 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:13:16.162 08:39:10 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:16.162 08:39:10 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:13:18.742 08:39:13 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:13:18.742 08:39:13 setup.sh.driver -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:18.742 08:39:13 setup.sh.driver -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:18.742 08:39:13 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:13:18.742 ************************************ 00:13:18.742 START TEST guess_driver 00:13:18.742 ************************************ 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # guess_driver 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 189 > 0 )) 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:13:18.742 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:13:18.742 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:13:18.742 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:13:18.742 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:13:18.742 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:13:18.742 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:13:18.742 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:13:18.742 08:39:13 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:13:18.742 Looking for driver=vfio-pci 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:13:18.742 08:39:13 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:20.118 08:39:14 
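At this point guess_driver has settled on vfio-pci because /sys/kernel/iommu_groups is populated (189 groups) and modprobe can resolve the whole vfio_pci dependency chain. A condensed sketch of that decision; the real pick_driver also inspects the unsafe-noiommu knob and has non-vfio fallbacks that this trace never reaches:

pick_driver_sketch() {
    shopt -s nullglob                  # an empty dir yields an empty array
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if (( ${#iommu_groups[@]} > 0 )) && modprobe --show-depends vfio_pci &> /dev/null; then
        echo vfio-pci
    else
        echo 'No valid driver found'   # the string driver.sh@51 tests against
    fi
}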
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:13:20.118 08:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:21.052 08:39:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:21.052 08:39:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:13:21.052 08:39:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:21.052 08:39:15 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:13:21.052 08:39:15 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:13:21.052 08:39:15 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:21.052 08:39:15 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:13:24.333 00:13:24.333 real 0m5.146s 00:13:24.333 user 0m1.235s 00:13:24.333 sys 0m2.133s 00:13:24.333 08:39:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:24.333 08:39:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:13:24.333 ************************************ 00:13:24.333 END TEST guess_driver 00:13:24.333 ************************************ 00:13:24.333 00:13:24.333 real 0m7.685s 00:13:24.333 user 0m1.837s 00:13:24.333 sys 0m3.220s 00:13:24.333 08:39:18 setup.sh.driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:24.333 
08:39:18 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:13:24.333 ************************************ 00:13:24.333 END TEST driver 00:13:24.333 ************************************ 00:13:24.333 08:39:18 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:13:24.333 08:39:18 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:24.333 08:39:18 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:24.333 08:39:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:13:24.333 ************************************ 00:13:24.333 START TEST devices 00:13:24.333 ************************************ 00:13:24.333 08:39:18 setup.sh.devices -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:13:24.333 * Looking for test storage... 00:13:24.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:13:24.333 08:39:18 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:13:24.333 08:39:18 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:13:24.333 08:39:18 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:24.333 08:39:18 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:13:25.707 08:39:20 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:13:25.707 08:39:20 setup.sh.devices -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:13:25.707 08:39:20 setup.sh.devices -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:13:25.707 08:39:20 setup.sh.devices -- common/autotest_common.sh@1667 -- # local nvme bdf 00:13:25.707 08:39:20 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:13:25.707 08:39:20 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:13:25.707 08:39:20 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:13:25.707 08:39:20 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:25.707 08:39:20 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:13:25.707 08:39:20 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:13:25.707 08:39:20 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:13:25.707 08:39:20 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:13:25.707 08:39:20 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:13:25.707 08:39:20 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:13:25.707 08:39:20 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:13:25.707 08:39:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:13:25.707 08:39:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:13:25.707 08:39:20 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:0b:00.0 00:13:25.707 08:39:20 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:13:25.707 08:39:20 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:13:25.707 08:39:20 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:13:25.707 08:39:20 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:13:25.707 No valid GPT data, 
bailing 00:13:25.707 08:39:20 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:25.707 08:39:20 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:13:25.707 08:39:20 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:13:25.707 08:39:20 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:13:25.707 08:39:20 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:25.707 08:39:20 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:25.707 08:39:20 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:13:25.707 08:39:20 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:13:25.707 08:39:20 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:13:25.707 08:39:20 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:0b:00.0 00:13:25.707 08:39:20 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:13:25.707 08:39:20 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:13:25.707 08:39:20 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:13:25.707 08:39:20 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:25.708 08:39:20 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:25.708 08:39:20 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:13:25.708 ************************************ 00:13:25.708 START TEST nvme_mount 00:13:25.708 ************************************ 00:13:25.708 08:39:20 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # nvme_mount 00:13:25.708 08:39:20 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:13:25.708 08:39:20 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:13:25.708 08:39:20 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:13:25.708 08:39:20 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:13:25.708 08:39:20 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:13:25.708 08:39:20 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:13:25.708 08:39:20 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:13:25.708 08:39:20 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:13:25.708 08:39:20 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:13:25.708 08:39:20 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:13:25.708 08:39:20 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:13:25.708 08:39:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:13:25.708 08:39:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:13:25.708 08:39:20 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:13:25.708 08:39:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:13:25.708 08:39:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:13:25.708 08:39:20 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:13:25.708 08:39:20 
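The device scan above admits nvme0n1 because spdk-gpt.py finds no valid GPT, blkid reports no partition-table type, and the namespace clears the 3 GiB minimum (its 1000204886016 bytes come from the 512-byte sector count in sysfs). A standalone sketch of that filter, with the device name hard-coded for illustration:

block=nvme0n1
min_disk_size=$((3 * 1024 * 1024 * 1024))        # 3221225472, as in devices.sh@198
pt=$(blkid -s PTTYPE -o value "/dev/$block")     # empty when no partition table exists
size=$(( $(< "/sys/block/$block/size") * 512 ))  # sysfs counts 512-byte sectors
if [[ -z $pt ]] && (( size >= min_disk_size )); then
    echo "/dev/$block looks free and large enough: $size bytes"
fi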
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:13:25.708 08:39:20 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:13:26.642 Creating new GPT entries in memory. 00:13:26.642 GPT data structures destroyed! You may now partition the disk using fdisk or 00:13:26.642 other utilities. 00:13:26.642 08:39:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:13:26.642 08:39:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:13:26.642 08:39:21 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:13:26.642 08:39:21 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:13:26.642 08:39:21 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:13:27.577 Creating new GPT entries in memory. 00:13:27.577 The operation has completed successfully. 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2126514 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
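Once the 1 GiB partition exists (sectors 2048 through 2099199), the test formats it ext4, mounts it under the repo's nvme_mount directory, and creates the test_nvme marker; the bare ':' at devices.sh@56 is most likely a redirect-only ': > test_nvme' (xtrace hides redirections). A shortened sketch of the step, with SPDK_DIR standing in for the long workspace path, before the verify() pass that resumes below checks the mount against setup.sh's status output:

dev=/dev/nvme0n1p1
nvme_mount=$SPDK_DIR/test/setup/nvme_mount   # SPDK_DIR is a stand-in, not the script's variable
mkdir -p "$nvme_mount"
mkfs.ext4 -qF "$dev"                         # quiet, force: as in common.sh@71
mount "$dev" "$nvme_mount"
: > "$nvme_mount/test_nvme"                  # create the marker verify() expects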
00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:13:27.577 08:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:13:28.955 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:28.955 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:28.955 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:28.955 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:28.955 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:28.955 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:28.955 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:28.955 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:28.955 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:28.955 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:28.955 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:28.955 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:28.955 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:28.955 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:28.955 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:28.955 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:28.955 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:28.955 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:13:28.955 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:13:28.955 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:13:28.956 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:13:28.956 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:13:29.214 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:13:29.214 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:13:29.214 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:13:29.214 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:13:29.214 08:39:23 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:13:29.214 08:39:23 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:13:29.214 08:39:23 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:13:29.214 08:39:23 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:13:29.214 08:39:23 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:13:29.472 08:39:24 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:13:29.472 08:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:13:29.472 08:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:13:29.472 08:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:13:29.472 08:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:13:29.472 08:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:13:29.472 08:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:13:29.472 08:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:13:29.472 08:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:13:29.472 08:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:13:29.472 08:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:29.472 08:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:13:29.472 08:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:13:29.472 08:39:24 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:13:29.472 08:39:24 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:13:30.406 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:30.406 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:30.406 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:30.406 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:30.406 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:30.406 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:30.406 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:30.406 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:30.406 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:30.406 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:30.406 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == 
\0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:30.406 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:30.406 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:30.406 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:30.406 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:30.406 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:13:30.664 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:0b:00.0 data@nvme0n1 '' '' 00:13:30.665 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:13:30.665 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:13:30.665 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:13:30.665 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:13:30.665 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:13:30.665 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:13:30.665 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:13:30.665 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:30.665 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:13:30.665 08:39:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:13:30.665 08:39:25 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:13:30.665 08:39:25 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 
00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:32.042 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:32.043 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:32.043 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:32.043 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:32.043 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:32.301 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:13:32.301 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:13:32.301 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:13:32.301 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:13:32.301 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:13:32.301 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:13:32.301 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:13:32.301 08:39:26 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:13:32.301 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:13:32.301 00:13:32.301 real 0m6.711s 00:13:32.301 user 0m1.661s 00:13:32.301 sys 0m2.644s 00:13:32.301 08:39:26 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:32.301 08:39:26 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:13:32.301 ************************************ 00:13:32.301 END TEST nvme_mount 00:13:32.301 ************************************ 00:13:32.301 
08:39:26 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:13:32.301 08:39:26 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:32.301 08:39:26 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:32.301 08:39:26 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:13:32.301 ************************************ 00:13:32.301 START TEST dm_mount 00:13:32.301 ************************************ 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # dm_mount 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:13:32.301 08:39:26 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:13:33.234 Creating new GPT entries in memory. 00:13:33.234 GPT data structures destroyed! You may now partition the disk using fdisk or 00:13:33.234 other utilities. 00:13:33.234 08:39:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:13:33.234 08:39:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:13:33.234 08:39:27 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:13:33.234 08:39:27 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:13:33.234 08:39:27 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:13:34.167 Creating new GPT entries in memory. 00:13:34.167 The operation has completed successfully. 
00:13:34.167 08:39:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:13:34.167 08:39:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:13:34.167 08:39:28 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:13:34.167 08:39:28 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:13:34.167 08:39:28 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:13:35.540 The operation has completed successfully. 00:13:35.540 08:39:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:13:35.540 08:39:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:13:35.540 08:39:29 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2129191 00:13:35.540 08:39:29 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:13:35.540 08:39:29 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:13:35.540 08:39:29 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:13:35.540 08:39:29 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:0b:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:13:35.540 08:39:30 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:13:36.913 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:0b:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:13:36.914 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:13:36.914 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:13:36.914 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:13:36.914 
08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:13:36.914 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:13:36.914 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:13:36.914 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:13:36.914 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:36.914 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:13:36.914 08:39:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:13:36.914 08:39:31 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:13:36.914 08:39:31 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:13:38.321 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:38.321 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.321 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:38.321 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.321 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:38.321 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.321 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:38.321 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.321 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:38.321 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.321 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:38.321 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.321 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:38.321 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.321 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:13:38.322 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:13:38.322 00:13:38.322 real 0m6.042s 00:13:38.322 user 0m1.086s 00:13:38.322 sys 0m1.838s 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:38.322 08:39:32 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:13:38.322 ************************************ 00:13:38.322 END TEST dm_mount 00:13:38.322 ************************************ 00:13:38.322 08:39:32 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:13:38.322 08:39:32 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:13:38.322 08:39:32 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:13:38.322 08:39:32 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
00:13:38.322 08:39:32 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:13:38.322 08:39:33 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:13:38.322 08:39:33 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:13:38.579 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:13:38.579 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:13:38.579 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:13:38.579 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:13:38.579 08:39:33 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:13:38.579 08:39:33 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:13:38.579 08:39:33 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:13:38.579 08:39:33 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:13:38.579 08:39:33 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:13:38.579 08:39:33 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:13:38.579 08:39:33 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:13:38.579 00:13:38.579 real 0m14.820s 00:13:38.579 user 0m3.479s 00:13:38.579 sys 0m5.593s 00:13:38.579 08:39:33 setup.sh.devices -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:38.579 08:39:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:13:38.579 ************************************ 00:13:38.579 END TEST devices 00:13:38.579 ************************************ 00:13:38.579 00:13:38.579 real 0m46.641s 00:13:38.579 user 0m13.833s 00:13:38.579 sys 0m21.536s 00:13:38.579 08:39:33 setup.sh -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:38.579 08:39:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:13:38.579 ************************************ 00:13:38.579 END TEST setup.sh 00:13:38.579 ************************************ 00:13:38.579 08:39:33 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:13:39.951 Hugepages 00:13:39.951 node hugesize free / total 00:13:39.951 node0 1048576kB 0 / 0 00:13:39.951 node0 2048kB 2048 / 2048 00:13:39.951 node1 1048576kB 0 / 0 00:13:39.951 node1 2048kB 0 / 0 00:13:39.951 00:13:39.951 Type BDF Vendor Device NUMA Driver Device Block devices 00:13:39.951 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:13:39.951 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:13:39.951 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:13:39.951 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:13:39.951 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:13:39.951 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:13:39.951 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:13:39.951 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:13:39.951 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:13:39.951 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:13:39.951 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:13:39.951 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:13:39.951 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:13:39.951 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:13:39.951 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:13:39.951 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:13:39.951 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:13:39.951 08:39:34 -- spdk/autotest.sh@130 -- # uname -s 
00:13:39.951 08:39:34 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:13:39.951 08:39:34 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:13:39.951 08:39:34 -- common/autotest_common.sh@1528 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:13:41.844 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:13:41.844 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:13:41.844 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:13:41.844 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:13:41.844 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:13:41.844 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:13:41.844 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:13:41.844 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:13:41.844 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:13:41.844 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:13:41.844 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:13:41.844 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:13:41.844 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:13:41.844 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:13:41.844 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:13:41.844 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:13:42.408 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:13:42.666 08:39:37 -- common/autotest_common.sh@1529 -- # sleep 1 00:13:43.598 08:39:38 -- common/autotest_common.sh@1530 -- # bdfs=() 00:13:43.598 08:39:38 -- common/autotest_common.sh@1530 -- # local bdfs 00:13:43.598 08:39:38 -- common/autotest_common.sh@1531 -- # bdfs=($(get_nvme_bdfs)) 00:13:43.598 08:39:38 -- common/autotest_common.sh@1531 -- # get_nvme_bdfs 00:13:43.598 08:39:38 -- common/autotest_common.sh@1510 -- # bdfs=() 00:13:43.598 08:39:38 -- common/autotest_common.sh@1510 -- # local bdfs 00:13:43.598 08:39:38 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:43.598 08:39:38 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:13:43.598 08:39:38 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:13:43.598 08:39:38 -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:13:43.598 08:39:38 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:0b:00.0 00:13:43.598 08:39:38 -- common/autotest_common.sh@1533 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:13:44.969 Waiting for block devices as requested 00:13:44.969 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:13:44.969 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:13:45.227 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:13:45.227 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:13:45.227 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:13:45.227 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:13:45.485 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:13:45.485 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:13:45.485 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:13:45.485 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:13:45.742 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:13:45.742 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:13:45.742 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:13:46.000 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:13:46.000 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:13:46.000 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:13:46.000 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:13:46.258 08:39:40 -- common/autotest_common.sh@1535 -- # 
for bdf in "${bdfs[@]}" 00:13:46.258 08:39:40 -- common/autotest_common.sh@1536 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:13:46.258 08:39:40 -- common/autotest_common.sh@1499 -- # readlink -f /sys/class/nvme/nvme0 00:13:46.258 08:39:40 -- common/autotest_common.sh@1499 -- # grep 0000:0b:00.0/nvme/nvme 00:13:46.258 08:39:40 -- common/autotest_common.sh@1499 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:13:46.258 08:39:40 -- common/autotest_common.sh@1500 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:13:46.258 08:39:40 -- common/autotest_common.sh@1504 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:13:46.258 08:39:40 -- common/autotest_common.sh@1504 -- # printf '%s\n' nvme0 00:13:46.258 08:39:40 -- common/autotest_common.sh@1536 -- # nvme_ctrlr=/dev/nvme0 00:13:46.258 08:39:40 -- common/autotest_common.sh@1537 -- # [[ -z /dev/nvme0 ]] 00:13:46.258 08:39:40 -- common/autotest_common.sh@1542 -- # nvme id-ctrl /dev/nvme0 00:13:46.258 08:39:40 -- common/autotest_common.sh@1542 -- # grep oacs 00:13:46.258 08:39:40 -- common/autotest_common.sh@1542 -- # cut -d: -f2 00:13:46.258 08:39:40 -- common/autotest_common.sh@1542 -- # oacs=' 0xf' 00:13:46.258 08:39:40 -- common/autotest_common.sh@1543 -- # oacs_ns_manage=8 00:13:46.258 08:39:40 -- common/autotest_common.sh@1545 -- # [[ 8 -ne 0 ]] 00:13:46.258 08:39:40 -- common/autotest_common.sh@1551 -- # nvme id-ctrl /dev/nvme0 00:13:46.258 08:39:40 -- common/autotest_common.sh@1551 -- # grep unvmcap 00:13:46.258 08:39:40 -- common/autotest_common.sh@1551 -- # cut -d: -f2 00:13:46.258 08:39:40 -- common/autotest_common.sh@1551 -- # unvmcap=' 0' 00:13:46.258 08:39:40 -- common/autotest_common.sh@1552 -- # [[ 0 -eq 0 ]] 00:13:46.258 08:39:40 -- common/autotest_common.sh@1554 -- # continue 00:13:46.258 08:39:40 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:13:46.258 08:39:40 -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:46.258 08:39:40 -- common/autotest_common.sh@10 -- # set +x 00:13:46.258 08:39:40 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:13:46.258 08:39:40 -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:46.258 08:39:40 -- common/autotest_common.sh@10 -- # set +x 00:13:46.258 08:39:40 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:13:47.631 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:13:47.631 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:13:47.631 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:13:47.631 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:13:47.631 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:13:47.631 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:13:47.631 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:13:47.631 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:13:47.631 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:13:47.631 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:13:47.631 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:13:47.631 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:13:47.631 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:13:47.631 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:13:47.631 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:13:47.631 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:13:48.564 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:13:48.822 08:39:43 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:13:48.822 08:39:43 -- common/autotest_common.sh@727 -- # xtrace_disable 
00:13:48.822 08:39:43 -- common/autotest_common.sh@10 -- # set +x 00:13:48.822 08:39:43 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:13:48.822 08:39:43 -- common/autotest_common.sh@1588 -- # mapfile -t bdfs 00:13:48.822 08:39:43 -- common/autotest_common.sh@1588 -- # get_nvme_bdfs_by_id 0x0a54 00:13:48.822 08:39:43 -- common/autotest_common.sh@1574 -- # bdfs=() 00:13:48.822 08:39:43 -- common/autotest_common.sh@1574 -- # local bdfs 00:13:48.822 08:39:43 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs 00:13:48.822 08:39:43 -- common/autotest_common.sh@1510 -- # bdfs=() 00:13:48.822 08:39:43 -- common/autotest_common.sh@1510 -- # local bdfs 00:13:48.822 08:39:43 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:48.822 08:39:43 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:13:48.822 08:39:43 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:13:48.822 08:39:43 -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:13:48.822 08:39:43 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:0b:00.0 00:13:48.822 08:39:43 -- common/autotest_common.sh@1576 -- # for bdf in $(get_nvme_bdfs) 00:13:48.822 08:39:43 -- common/autotest_common.sh@1577 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:13:48.822 08:39:43 -- common/autotest_common.sh@1577 -- # device=0x0a54 00:13:48.822 08:39:43 -- common/autotest_common.sh@1578 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:13:48.822 08:39:43 -- common/autotest_common.sh@1579 -- # bdfs+=($bdf) 00:13:48.822 08:39:43 -- common/autotest_common.sh@1583 -- # printf '%s\n' 0000:0b:00.0 00:13:48.822 08:39:43 -- common/autotest_common.sh@1589 -- # [[ -z 0000:0b:00.0 ]] 00:13:48.822 08:39:43 -- common/autotest_common.sh@1594 -- # spdk_tgt_pid=2135083 00:13:48.822 08:39:43 -- common/autotest_common.sh@1593 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:13:48.822 08:39:43 -- common/autotest_common.sh@1595 -- # waitforlisten 2135083 00:13:48.822 08:39:43 -- common/autotest_common.sh@828 -- # '[' -z 2135083 ']' 00:13:48.822 08:39:43 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.822 08:39:43 -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:48.822 08:39:43 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.822 08:39:43 -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:48.822 08:39:43 -- common/autotest_common.sh@10 -- # set +x 00:13:48.822 [2024-05-15 08:39:43.520610] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:13:48.822 [2024-05-15 08:39:43.520713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2135083 ] 00:13:48.822 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.822 [2024-05-15 08:39:43.592506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.079 [2024-05-15 08:39:43.685039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.337 08:39:43 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:49.337 08:39:43 -- common/autotest_common.sh@861 -- # return 0 00:13:49.337 08:39:43 -- common/autotest_common.sh@1597 -- # bdf_id=0 00:13:49.337 08:39:43 -- common/autotest_common.sh@1598 -- # for bdf in "${bdfs[@]}" 00:13:49.337 08:39:43 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:13:52.618 nvme0n1 00:13:52.618 08:39:47 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:13:52.618 [2024-05-15 08:39:47.248772] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:13:52.618 [2024-05-15 08:39:47.248819] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:13:52.618 request: 00:13:52.618 { 00:13:52.618 "nvme_ctrlr_name": "nvme0", 00:13:52.618 "password": "test", 00:13:52.618 "method": "bdev_nvme_opal_revert", 00:13:52.618 "req_id": 1 00:13:52.618 } 00:13:52.618 Got JSON-RPC error response 00:13:52.618 response: 00:13:52.618 { 00:13:52.618 "code": -32603, 00:13:52.618 "message": "Internal error" 00:13:52.618 } 00:13:52.618 08:39:47 -- common/autotest_common.sh@1601 -- # true 00:13:52.618 08:39:47 -- common/autotest_common.sh@1602 -- # (( ++bdf_id )) 00:13:52.618 08:39:47 -- common/autotest_common.sh@1605 -- # killprocess 2135083 00:13:52.618 08:39:47 -- common/autotest_common.sh@947 -- # '[' -z 2135083 ']' 00:13:52.618 08:39:47 -- common/autotest_common.sh@951 -- # kill -0 2135083 00:13:52.618 08:39:47 -- common/autotest_common.sh@952 -- # uname 00:13:52.618 08:39:47 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:52.618 08:39:47 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2135083 00:13:52.618 08:39:47 -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:52.619 08:39:47 -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:52.619 08:39:47 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2135083' 00:13:52.619 killing process with pid 2135083 00:13:52.619 08:39:47 -- common/autotest_common.sh@966 -- # kill 2135083 00:13:52.619 08:39:47 -- common/autotest_common.sh@971 -- # wait 2135083 00:13:52.619 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:13:52.619 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:13:52.619 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:13:52.619 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:13:52.619 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:13:52.619 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:13:52.619 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:13:52.619 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 [the EAL 'Unexpected size 0 of DMA remapping cleared instead of 2097152' warning repeats about 190 more times through 00:13:52.878 while spdk_tgt tears down; duplicate lines omitted]
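The opal_revert_cleanup step above is worth unpacking: the revert is issued over SPDK's JSON-RPC socket, and error 18 from the drive is tolerated, which is why the trace follows the failed call with '# true'. A hand-run sketch of the same sequence against a running spdk_tgt (default socket /var/tmp/spdk.sock assumed):

    # Attach the controller found earlier, then attempt an OPAL revert.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0
    $rpc bdev_nvme_opal_revert -b nvme0 -p test \
        || echo "revert failed; tolerated, as in the autotest"

The -32603 code in the JSON-RPC response is the standard JSON-RPC 2.0 'Internal error' code; the useful detail is in the *ERROR* lines printed just before it ('Error on starting admin SP session with error 18').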
00:13:54.251 08:39:49 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:13:54.251 08:39:49 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:13:54.251 08:39:49 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:13:54.251 08:39:49 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:13:54.251 08:39:49 -- spdk/autotest.sh@162 -- # timing_enter lib 00:13:54.251 08:39:49 -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:54.251 08:39:49 -- common/autotest_common.sh@10 -- # set +x 00:13:54.251 08:39:49 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:13:54.251 08:39:49 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:54.251 08:39:49 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:54.251 08:39:49 -- common/autotest_common.sh@10 -- # set +x 00:13:54.509 ************************************ 00:13:54.509 START TEST env 00:13:54.509 ************************************ 00:13:54.509 08:39:49 env -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:13:54.509 * Looking for test storage... 
00:13:54.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:13:54.509 08:39:49 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:13:54.509 08:39:49 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:54.509 08:39:49 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:54.509 08:39:49 env -- common/autotest_common.sh@10 -- # set +x 00:13:54.509 ************************************ 00:13:54.509 START TEST env_memory 00:13:54.509 ************************************ 00:13:54.509 08:39:49 env.env_memory -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:13:54.509 00:13:54.509 00:13:54.509 CUnit - A unit testing framework for C - Version 2.1-3 00:13:54.509 http://cunit.sourceforge.net/ 00:13:54.509 00:13:54.509 00:13:54.509 Suite: memory 00:13:54.509 Test: alloc and free memory map ...[2024-05-15 08:39:49.171877] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:13:54.509 passed 00:13:54.509 Test: mem map translation ...[2024-05-15 08:39:49.191990] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:13:54.509 [2024-05-15 08:39:49.192011] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:13:54.509 [2024-05-15 08:39:49.192067] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:13:54.509 [2024-05-15 08:39:49.192079] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:13:54.509 passed 00:13:54.509 Test: mem map registration ...[2024-05-15 08:39:49.232735] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:13:54.509 [2024-05-15 08:39:49.232754] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:13:54.509 passed 00:13:54.509 Test: mem map adjacent registrations ...passed 00:13:54.509 00:13:54.509 Run Summary: Type Total Ran Passed Failed Inactive 00:13:54.509 suites 1 1 n/a 0 0 00:13:54.509 tests 4 4 4 0 0 00:13:54.509 asserts 152 152 152 0 n/a 00:13:54.509 00:13:54.509 Elapsed time = 0.144 seconds 00:13:54.509 00:13:54.509 real 0m0.152s 00:13:54.509 user 0m0.146s 00:13:54.509 sys 0m0.006s 00:13:54.509 08:39:49 env.env_memory -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:54.509 08:39:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:13:54.509 ************************************ 00:13:54.509 END TEST env_memory 00:13:54.509 ************************************ 00:13:54.769 08:39:49 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:13:54.769 08:39:49 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:54.769 08:39:49 env -- common/autotest_common.sh@1104 -- # xtrace_disable 
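Note that the *ERROR* lines inside env_memory are expected: spdk_mem_map translations work at 2 MB granularity, so the test deliberately feeds in unaligned values (vaddr=1234, len=1234) and an out-of-range address to confirm they are rejected, and the suite still reports 4/4 tests passed. The binary can be rerun on its own when debugging; a sketch assuming the same checkout path:

    # Re-run just the mem-map unit test; prints the CUnit summary above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut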
00:13:54.769 08:39:49 env -- common/autotest_common.sh@10 -- # set +x 00:13:54.769 ************************************ 00:13:54.769 START TEST env_vtophys 00:13:54.769 ************************************ 00:13:54.769 08:39:49 env.env_vtophys -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:13:54.769 EAL: lib.eal log level changed from notice to debug 00:13:54.769 EAL: Detected lcore 0 as core 0 on socket 0 00:13:54.769 EAL: Detected lcore 1 as core 1 on socket 0 00:13:54.769 EAL: Detected lcore 2 as core 2 on socket 0 00:13:54.769 EAL: Detected lcore 3 as core 3 on socket 0 00:13:54.769 EAL: Detected lcore 4 as core 4 on socket 0 00:13:54.769 EAL: Detected lcore 5 as core 5 on socket 0 00:13:54.769 EAL: Detected lcore 6 as core 8 on socket 0 00:13:54.769 EAL: Detected lcore 7 as core 9 on socket 0 00:13:54.769 EAL: Detected lcore 8 as core 10 on socket 0 00:13:54.769 EAL: Detected lcore 9 as core 11 on socket 0 00:13:54.769 EAL: Detected lcore 10 as core 12 on socket 0 00:13:54.769 EAL: Detected lcore 11 as core 13 on socket 0 00:13:54.769 EAL: Detected lcore 12 as core 0 on socket 1 00:13:54.769 EAL: Detected lcore 13 as core 1 on socket 1 00:13:54.769 EAL: Detected lcore 14 as core 2 on socket 1 00:13:54.769 EAL: Detected lcore 15 as core 3 on socket 1 00:13:54.769 EAL: Detected lcore 16 as core 4 on socket 1 00:13:54.769 EAL: Detected lcore 17 as core 5 on socket 1 00:13:54.769 EAL: Detected lcore 18 as core 8 on socket 1 00:13:54.769 EAL: Detected lcore 19 as core 9 on socket 1 00:13:54.769 EAL: Detected lcore 20 as core 10 on socket 1 00:13:54.769 EAL: Detected lcore 21 as core 11 on socket 1 00:13:54.769 EAL: Detected lcore 22 as core 12 on socket 1 00:13:54.769 EAL: Detected lcore 23 as core 13 on socket 1 00:13:54.769 EAL: Detected lcore 24 as core 0 on socket 0 00:13:54.769 EAL: Detected lcore 25 as core 1 on socket 0 00:13:54.769 EAL: Detected lcore 26 as core 2 on socket 0 00:13:54.769 EAL: Detected lcore 27 as core 3 on socket 0 00:13:54.769 EAL: Detected lcore 28 as core 4 on socket 0 00:13:54.769 EAL: Detected lcore 29 as core 5 on socket 0 00:13:54.769 EAL: Detected lcore 30 as core 8 on socket 0 00:13:54.769 EAL: Detected lcore 31 as core 9 on socket 0 00:13:54.769 EAL: Detected lcore 32 as core 10 on socket 0 00:13:54.769 EAL: Detected lcore 33 as core 11 on socket 0 00:13:54.769 EAL: Detected lcore 34 as core 12 on socket 0 00:13:54.769 EAL: Detected lcore 35 as core 13 on socket 0 00:13:54.769 EAL: Detected lcore 36 as core 0 on socket 1 00:13:54.769 EAL: Detected lcore 37 as core 1 on socket 1 00:13:54.769 EAL: Detected lcore 38 as core 2 on socket 1 00:13:54.769 EAL: Detected lcore 39 as core 3 on socket 1 00:13:54.769 EAL: Detected lcore 40 as core 4 on socket 1 00:13:54.769 EAL: Detected lcore 41 as core 5 on socket 1 00:13:54.769 EAL: Detected lcore 42 as core 8 on socket 1 00:13:54.769 EAL: Detected lcore 43 as core 9 on socket 1 00:13:54.770 EAL: Detected lcore 44 as core 10 on socket 1 00:13:54.770 EAL: Detected lcore 45 as core 11 on socket 1 00:13:54.770 EAL: Detected lcore 46 as core 12 on socket 1 00:13:54.770 EAL: Detected lcore 47 as core 13 on socket 1 00:13:54.770 EAL: Maximum logical cores by configuration: 128 00:13:54.770 EAL: Detected CPU lcores: 48 00:13:54.770 EAL: Detected NUMA nodes: 2 00:13:54.770 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:13:54.770 EAL: Detected shared linkage of DPDK 00:13:54.770 EAL: open shared lib 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:13:54.770 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:13:54.770 EAL: Registered [vdev] bus. 00:13:54.770 EAL: bus.vdev log level changed from disabled to notice 00:13:54.770 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:13:54.770 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:13:54.770 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:13:54.770 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:13:54.770 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:13:54.770 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:13:54.770 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:13:54.770 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:13:54.770 EAL: No shared files mode enabled, IPC will be disabled 00:13:54.770 EAL: No shared files mode enabled, IPC is disabled 00:13:54.770 EAL: Bus pci wants IOVA as 'DC' 00:13:54.770 EAL: Bus vdev wants IOVA as 'DC' 00:13:54.770 EAL: Buses did not request a specific IOVA mode. 00:13:54.770 EAL: IOMMU is available, selecting IOVA as VA mode. 00:13:54.770 EAL: Selected IOVA mode 'VA' 00:13:54.770 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.770 EAL: Probing VFIO support... 00:13:54.770 EAL: IOMMU type 1 (Type 1) is supported 00:13:54.770 EAL: IOMMU type 7 (sPAPR) is not supported 00:13:54.770 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:13:54.770 EAL: VFIO support initialized 00:13:54.770 EAL: Ask a virtual area of 0x2e000 bytes 00:13:54.770 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:13:54.770 EAL: Setting up physically contiguous memory... 
00:13:54.770 EAL: Setting maximum number of open files to 524288 00:13:54.770 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:13:54.770 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:13:54.770 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:13:54.770 EAL: Ask a virtual area of 0x61000 bytes 00:13:54.770 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:13:54.770 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:54.770 EAL: Ask a virtual area of 0x400000000 bytes 00:13:54.770 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:13:54.770 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:13:54.770 EAL: Ask a virtual area of 0x61000 bytes 00:13:54.770 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:13:54.770 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:54.770 EAL: Ask a virtual area of 0x400000000 bytes 00:13:54.770 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:13:54.770 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:13:54.770 EAL: Ask a virtual area of 0x61000 bytes 00:13:54.770 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:13:54.770 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:54.770 EAL: Ask a virtual area of 0x400000000 bytes 00:13:54.770 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:13:54.770 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:13:54.770 EAL: Ask a virtual area of 0x61000 bytes 00:13:54.770 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:13:54.770 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:54.770 EAL: Ask a virtual area of 0x400000000 bytes 00:13:54.770 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:13:54.770 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:13:54.770 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:13:54.770 EAL: Ask a virtual area of 0x61000 bytes 00:13:54.770 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:13:54.770 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:13:54.770 EAL: Ask a virtual area of 0x400000000 bytes 00:13:54.770 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:13:54.770 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:13:54.770 EAL: Ask a virtual area of 0x61000 bytes 00:13:54.770 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:13:54.770 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:13:54.770 EAL: Ask a virtual area of 0x400000000 bytes 00:13:54.770 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:13:54.770 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:13:54.770 EAL: Ask a virtual area of 0x61000 bytes 00:13:54.770 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:13:54.770 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:13:54.770 EAL: Ask a virtual area of 0x400000000 bytes 00:13:54.770 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:13:54.770 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:13:54.770 EAL: Ask a virtual area of 0x61000 bytes 00:13:54.770 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:13:54.770 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:13:54.770 EAL: Ask a virtual area of 0x400000000 bytes 00:13:54.770 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:13:54.770 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:13:54.770 EAL: Hugepages will be freed exactly as allocated. 00:13:54.770 EAL: No shared files mode enabled, IPC is disabled 00:13:54.770 EAL: No shared files mode enabled, IPC is disabled 00:13:54.770 EAL: TSC frequency is ~2700000 KHz 00:13:54.770 EAL: Main lcore 0 is ready (tid=7f1c9cde2a00;cpuset=[0]) 00:13:54.770 EAL: Trying to obtain current memory policy. 00:13:54.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:54.770 EAL: Restoring previous memory policy: 0 00:13:54.770 EAL: request: mp_malloc_sync 00:13:54.770 EAL: No shared files mode enabled, IPC is disabled 00:13:54.770 EAL: Heap on socket 0 was expanded by 2MB 00:13:54.770 EAL: No shared files mode enabled, IPC is disabled 00:13:54.770 EAL: No shared files mode enabled, IPC is disabled 00:13:54.770 EAL: No PCI address specified using 'addr=' in: bus=pci 00:13:54.770 EAL: Mem event callback 'spdk:(nil)' registered 00:13:54.770 00:13:54.770 00:13:54.770 CUnit - A unit testing framework for C - Version 2.1-3 00:13:54.770 http://cunit.sourceforge.net/ 00:13:54.770 00:13:54.770 00:13:54.770 Suite: components_suite 00:13:54.770 Test: vtophys_malloc_test ...passed 00:13:54.770 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:13:54.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:54.770 EAL: Restoring previous memory policy: 4 00:13:54.770 EAL: Calling mem event callback 'spdk:(nil)' 00:13:54.770 EAL: request: mp_malloc_sync 00:13:54.770 EAL: No shared files mode enabled, IPC is disabled 00:13:54.770 EAL: Heap on socket 0 was expanded by 4MB 00:13:54.770 EAL: Calling mem event callback 'spdk:(nil)' 00:13:54.770 EAL: request: mp_malloc_sync 00:13:54.770 EAL: No shared files mode enabled, IPC is disabled 00:13:54.770 EAL: Heap on socket 0 was shrunk by 4MB 00:13:54.770 EAL: Trying to obtain current memory policy. 00:13:54.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:54.770 EAL: Restoring previous memory policy: 4 00:13:54.770 EAL: Calling mem event callback 'spdk:(nil)' 00:13:54.770 EAL: request: mp_malloc_sync 00:13:54.770 EAL: No shared files mode enabled, IPC is disabled 00:13:54.770 EAL: Heap on socket 0 was expanded by 6MB 00:13:54.770 EAL: Calling mem event callback 'spdk:(nil)' 00:13:54.770 EAL: request: mp_malloc_sync 00:13:54.770 EAL: No shared files mode enabled, IPC is disabled 00:13:54.770 EAL: Heap on socket 0 was shrunk by 6MB 00:13:54.770 EAL: Trying to obtain current memory policy. 00:13:54.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:54.770 EAL: Restoring previous memory policy: 4 00:13:54.770 EAL: Calling mem event callback 'spdk:(nil)' 00:13:54.770 EAL: request: mp_malloc_sync 00:13:54.770 EAL: No shared files mode enabled, IPC is disabled 00:13:54.770 EAL: Heap on socket 0 was expanded by 10MB 00:13:54.770 EAL: Calling mem event callback 'spdk:(nil)' 00:13:54.770 EAL: request: mp_malloc_sync 00:13:54.770 EAL: No shared files mode enabled, IPC is disabled 00:13:54.770 EAL: Heap on socket 0 was shrunk by 10MB 00:13:54.770 EAL: Trying to obtain current memory policy. 
00:13:54.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:54.770 EAL: Restoring previous memory policy: 4 00:13:54.770 EAL: Calling mem event callback 'spdk:(nil)' 00:13:54.770 EAL: request: mp_malloc_sync 00:13:54.770 EAL: No shared files mode enabled, IPC is disabled 00:13:54.770 EAL: Heap on socket 0 was expanded by 18MB 00:13:54.770 EAL: Calling mem event callback 'spdk:(nil)' 00:13:54.770 EAL: request: mp_malloc_sync 00:13:54.770 EAL: No shared files mode enabled, IPC is disabled 00:13:54.770 EAL: Heap on socket 0 was shrunk by 18MB 00:13:54.770 EAL: Trying to obtain current memory policy. 00:13:54.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:54.770 EAL: Restoring previous memory policy: 4 00:13:54.770 EAL: Calling mem event callback 'spdk:(nil)' 00:13:54.770 EAL: request: mp_malloc_sync 00:13:54.770 EAL: No shared files mode enabled, IPC is disabled 00:13:54.770 EAL: Heap on socket 0 was expanded by 34MB 00:13:54.770 EAL: Calling mem event callback 'spdk:(nil)' 00:13:54.770 EAL: request: mp_malloc_sync 00:13:54.770 EAL: No shared files mode enabled, IPC is disabled 00:13:54.770 EAL: Heap on socket 0 was shrunk by 34MB 00:13:54.770 EAL: Trying to obtain current memory policy. 00:13:54.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:54.770 EAL: Restoring previous memory policy: 4 00:13:54.770 EAL: Calling mem event callback 'spdk:(nil)' 00:13:54.770 EAL: request: mp_malloc_sync 00:13:54.770 EAL: No shared files mode enabled, IPC is disabled 00:13:54.770 EAL: Heap on socket 0 was expanded by 66MB 00:13:54.770 EAL: Calling mem event callback 'spdk:(nil)' 00:13:54.770 EAL: request: mp_malloc_sync 00:13:54.770 EAL: No shared files mode enabled, IPC is disabled 00:13:54.770 EAL: Heap on socket 0 was shrunk by 66MB 00:13:54.770 EAL: Trying to obtain current memory policy. 00:13:54.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:54.770 EAL: Restoring previous memory policy: 4 00:13:54.770 EAL: Calling mem event callback 'spdk:(nil)' 00:13:54.770 EAL: request: mp_malloc_sync 00:13:54.771 EAL: No shared files mode enabled, IPC is disabled 00:13:54.771 EAL: Heap on socket 0 was expanded by 130MB 00:13:54.771 EAL: Calling mem event callback 'spdk:(nil)' 00:13:55.067 EAL: request: mp_malloc_sync 00:13:55.067 EAL: No shared files mode enabled, IPC is disabled 00:13:55.067 EAL: Heap on socket 0 was shrunk by 130MB 00:13:55.067 EAL: Trying to obtain current memory policy. 00:13:55.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:55.067 EAL: Restoring previous memory policy: 4 00:13:55.067 EAL: Calling mem event callback 'spdk:(nil)' 00:13:55.067 EAL: request: mp_malloc_sync 00:13:55.067 EAL: No shared files mode enabled, IPC is disabled 00:13:55.067 EAL: Heap on socket 0 was expanded by 258MB 00:13:55.067 EAL: Calling mem event callback 'spdk:(nil)' 00:13:55.067 EAL: request: mp_malloc_sync 00:13:55.067 EAL: No shared files mode enabled, IPC is disabled 00:13:55.067 EAL: Heap on socket 0 was shrunk by 258MB 00:13:55.067 EAL: Trying to obtain current memory policy. 
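A pattern worth noticing in vtophys_spdk_malloc_test: each round allocates 2^n + 2 MB (4, 6, 10, 18, 34, 66, 130, 258, then 514 and 1026 MB below), and every 'expanded by' has a matching 'shrunk by' once the buffer is freed, confirming the mem-event callbacks fire symmetrically. The test binary can also be run standalone; a sketch assuming this checkout:

    # Re-run the vtophys suite by itself (needs 2 MB hugepages configured).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys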
00:13:55.068 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:55.326 EAL: Restoring previous memory policy: 4 00:13:55.326 EAL: Calling mem event callback 'spdk:(nil)' 00:13:55.326 EAL: request: mp_malloc_sync 00:13:55.326 EAL: No shared files mode enabled, IPC is disabled 00:13:55.326 EAL: Heap on socket 0 was expanded by 514MB 00:13:55.326 EAL: Calling mem event callback 'spdk:(nil)' 00:13:55.585 EAL: request: mp_malloc_sync 00:13:55.585 EAL: No shared files mode enabled, IPC is disabled 00:13:55.585 EAL: Heap on socket 0 was shrunk by 514MB 00:13:55.585 EAL: Trying to obtain current memory policy. 00:13:55.585 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:55.843 EAL: Restoring previous memory policy: 4 00:13:55.843 EAL: Calling mem event callback 'spdk:(nil)' 00:13:55.843 EAL: request: mp_malloc_sync 00:13:55.844 EAL: No shared files mode enabled, IPC is disabled 00:13:55.844 EAL: Heap on socket 0 was expanded by 1026MB 00:13:56.102 EAL: Calling mem event callback 'spdk:(nil)' 00:13:56.102 EAL: request: mp_malloc_sync 00:13:56.102 EAL: No shared files mode enabled, IPC is disabled 00:13:56.102 EAL: Heap on socket 0 was shrunk by 1026MB 00:13:56.102 passed 00:13:56.102 00:13:56.102 Run Summary: Type Total Ran Passed Failed Inactive 00:13:56.102 suites 1 1 n/a 0 0 00:13:56.102 tests 2 2 2 0 0 00:13:56.102 asserts 497 497 497 0 n/a 00:13:56.102 00:13:56.102 Elapsed time = 1.380 seconds 00:13:56.102 EAL: Calling mem event callback 'spdk:(nil)' 00:13:56.102 EAL: request: mp_malloc_sync 00:13:56.102 EAL: No shared files mode enabled, IPC is disabled 00:13:56.102 EAL: Heap on socket 0 was shrunk by 2MB 00:13:56.102 EAL: No shared files mode enabled, IPC is disabled 00:13:56.102 EAL: No shared files mode enabled, IPC is disabled 00:13:56.102 EAL: No shared files mode enabled, IPC is disabled 00:13:56.102 00:13:56.102 real 0m1.517s 00:13:56.102 user 0m0.862s 00:13:56.102 sys 0m0.610s 00:13:56.102 08:39:50 env.env_vtophys -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:56.102 08:39:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:13:56.102 ************************************ 00:13:56.102 END TEST env_vtophys 00:13:56.102 ************************************ 00:13:56.102 08:39:50 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:13:56.102 08:39:50 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:56.102 08:39:50 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:56.102 08:39:50 env -- common/autotest_common.sh@10 -- # set +x 00:13:56.361 ************************************ 00:13:56.361 START TEST env_pci 00:13:56.361 ************************************ 00:13:56.361 08:39:50 env.env_pci -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:13:56.361 00:13:56.361 00:13:56.361 CUnit - A unit testing framework for C - Version 2.1-3 00:13:56.361 http://cunit.sourceforge.net/ 00:13:56.361 00:13:56.361 00:13:56.361 Suite: pci 00:13:56.361 Test: pci_hook ...[2024-05-15 08:39:50.919865] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2135978 has claimed it 00:13:56.361 EAL: Cannot find device (10000:00:01.0) 00:13:56.361 EAL: Failed to attach device on primary process 00:13:56.361 passed 00:13:56.361 00:13:56.361 Run Summary: Type Total Ran Passed Failed Inactive 
00:13:56.361 suites 1 1 n/a 0 0 00:13:56.361 tests 1 1 1 0 0 00:13:56.361 asserts 25 25 25 0 n/a 00:13:56.361 00:13:56.361 Elapsed time = 0.025 seconds 00:13:56.361 00:13:56.361 real 0m0.037s 00:13:56.361 user 0m0.009s 00:13:56.361 sys 0m0.028s 00:13:56.361 08:39:50 env.env_pci -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:56.361 08:39:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:13:56.361 ************************************ 00:13:56.361 END TEST env_pci 00:13:56.361 ************************************ 00:13:56.361 08:39:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:13:56.361 08:39:50 env -- env/env.sh@15 -- # uname 00:13:56.361 08:39:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:13:56.361 08:39:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:13:56.361 08:39:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:13:56.361 08:39:50 env -- common/autotest_common.sh@1098 -- # '[' 5 -le 1 ']' 00:13:56.361 08:39:50 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:56.361 08:39:50 env -- common/autotest_common.sh@10 -- # set +x 00:13:56.361 ************************************ 00:13:56.361 START TEST env_dpdk_post_init 00:13:56.361 ************************************ 00:13:56.361 08:39:50 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:13:56.361 EAL: Detected CPU lcores: 48 00:13:56.361 EAL: Detected NUMA nodes: 2 00:13:56.361 EAL: Detected shared linkage of DPDK 00:13:56.361 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:13:56.361 EAL: Selected IOVA mode 'VA' 00:13:56.361 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.361 EAL: VFIO support initialized 00:13:56.361 TELEMETRY: No legacy callbacks, legacy socket not created 00:13:56.361 EAL: Using IOMMU type 1 (Type 1) 00:13:56.361 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:13:56.361 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:13:56.619 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:13:56.619 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:13:56.619 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:13:56.619 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:13:56.619 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:13:56.619 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:13:57.186 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:13:57.186 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:13:57.444 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:13:57.444 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:13:57.444 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:13:57.444 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:13:57.444 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:13:57.444 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:13:57.444 EAL: Probe PCI 
driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:14:00.725 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:14:00.725 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:14:00.725 Starting DPDK initialization... 00:14:00.725 Starting SPDK post initialization... 00:14:00.725 SPDK NVMe probe 00:14:00.725 Attaching to 0000:0b:00.0 00:14:00.725 Attached to 0000:0b:00.0 00:14:00.725 Cleaning up... 00:14:00.725 00:14:00.725 real 0m4.371s 00:14:00.725 user 0m3.220s 00:14:00.725 sys 0m0.209s 00:14:00.725 08:39:55 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:00.725 08:39:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:14:00.725 ************************************ 00:14:00.725 END TEST env_dpdk_post_init 00:14:00.725 ************************************ 00:14:00.725 08:39:55 env -- env/env.sh@26 -- # uname 00:14:00.725 08:39:55 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:14:00.725 08:39:55 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:14:00.725 08:39:55 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:00.725 08:39:55 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:00.725 08:39:55 env -- common/autotest_common.sh@10 -- # set +x 00:14:00.725 ************************************ 00:14:00.725 START TEST env_mem_callbacks 00:14:00.725 ************************************ 00:14:00.725 08:39:55 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:14:00.725 EAL: Detected CPU lcores: 48 00:14:00.725 EAL: Detected NUMA nodes: 2 00:14:00.725 EAL: Detected shared linkage of DPDK 00:14:00.725 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:14:00.725 EAL: Selected IOVA mode 'VA' 00:14:00.725 EAL: No free 2048 kB hugepages reported on node 1 00:14:00.725 EAL: VFIO support initialized 00:14:00.725 TELEMETRY: No legacy callbacks, legacy socket not created 00:14:00.725 00:14:00.725 00:14:00.725 CUnit - A unit testing framework for C - Version 2.1-3 00:14:00.725 http://cunit.sourceforge.net/ 00:14:00.725 00:14:00.725 00:14:00.725 Suite: memory 00:14:00.725 Test: test ... 
00:14:00.725 register 0x200000200000 2097152 00:14:00.725 malloc 3145728 00:14:00.725 register 0x200000400000 4194304 00:14:00.725 buf 0x200000500000 len 3145728 PASSED 00:14:00.725 malloc 64 00:14:00.725 buf 0x2000004fff40 len 64 PASSED 00:14:00.725 malloc 4194304 00:14:00.725 register 0x200000800000 6291456 00:14:00.725 buf 0x200000a00000 len 4194304 PASSED 00:14:00.725 free 0x200000500000 3145728 00:14:00.725 free 0x2000004fff40 64 00:14:00.725 unregister 0x200000400000 4194304 PASSED 00:14:00.725 free 0x200000a00000 4194304 00:14:00.725 unregister 0x200000800000 6291456 PASSED 00:14:00.725 malloc 8388608 00:14:00.725 register 0x200000400000 10485760 00:14:00.725 buf 0x200000600000 len 8388608 PASSED 00:14:00.725 free 0x200000600000 8388608 00:14:00.725 unregister 0x200000400000 10485760 PASSED 00:14:00.725 passed 00:14:00.725 00:14:00.725 Run Summary: Type Total Ran Passed Failed Inactive 00:14:00.725 suites 1 1 n/a 0 0 00:14:00.725 tests 1 1 1 0 0 00:14:00.725 asserts 15 15 15 0 n/a 00:14:00.725 00:14:00.725 Elapsed time = 0.005 seconds 00:14:00.725 00:14:00.725 real 0m0.052s 00:14:00.725 user 0m0.011s 00:14:00.725 sys 0m0.041s 00:14:00.725 08:39:55 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:00.725 08:39:55 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:14:00.725 ************************************ 00:14:00.725 END TEST env_mem_callbacks 00:14:00.725 ************************************ 00:14:00.725 00:14:00.725 real 0m6.449s 00:14:00.725 user 0m4.363s 00:14:00.725 sys 0m1.107s 00:14:00.725 08:39:55 env -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:00.725 08:39:55 env -- common/autotest_common.sh@10 -- # set +x 00:14:00.725 ************************************ 00:14:00.725 END TEST env 00:14:00.725 ************************************ 00:14:00.984 08:39:55 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:14:00.984 08:39:55 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:00.984 08:39:55 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:00.984 08:39:55 -- common/autotest_common.sh@10 -- # set +x 00:14:00.984 ************************************ 00:14:00.984 START TEST rpc 00:14:00.984 ************************************ 00:14:00.984 08:39:55 rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:14:00.984 * Looking for test storage... 00:14:00.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:14:00.984 08:39:55 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2136630 00:14:00.984 08:39:55 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:14:00.984 08:39:55 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:00.984 08:39:55 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2136630 00:14:00.984 08:39:55 rpc -- common/autotest_common.sh@828 -- # '[' -z 2136630 ']' 00:14:00.984 08:39:55 rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.984 08:39:55 rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:00.985 08:39:55 rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
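The rpc suite that starts here drives a fresh spdk_tgt purely through scripts/rpc.py. The rpc_integrity test below reduces to a create/inspect/delete round trip; a condensed sketch using the same commands and names that appear in the trace:

    # Round trip exercised by rpc_integrity (socket /var/tmp/spdk.sock).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 8 512                  # prints bdev name: Malloc0
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0
    $rpc bdev_get_bdevs | jq length                # 2 bdevs while both exist
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc0
    $rpc bdev_get_bdevs | jq length                # back to 0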
00:14:00.985 08:39:55 rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:00.985 08:39:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.985 [2024-05-15 08:39:55.651806] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:14:00.985 [2024-05-15 08:39:55.651884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2136630 ] 00:14:00.985 EAL: No free 2048 kB hugepages reported on node 1 00:14:00.985 [2024-05-15 08:39:55.716570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.243 [2024-05-15 08:39:55.801287] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:14:01.243 [2024-05-15 08:39:55.801354] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2136630' to capture a snapshot of events at runtime. 00:14:01.243 [2024-05-15 08:39:55.801368] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.243 [2024-05-15 08:39:55.801379] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.243 [2024-05-15 08:39:55.801389] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2136630 for offline analysis/debug. 00:14:01.243 [2024-05-15 08:39:55.801425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.502 08:39:56 rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:01.502 08:39:56 rpc -- common/autotest_common.sh@861 -- # return 0 00:14:01.502 08:39:56 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:14:01.502 08:39:56 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:14:01.502 08:39:56 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:14:01.502 08:39:56 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:14:01.502 08:39:56 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:01.502 08:39:56 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:01.502 08:39:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.502 ************************************ 00:14:01.502 START TEST rpc_integrity 00:14:01.502 ************************************ 00:14:01.502 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:14:01.502 08:39:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:01.502 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:01.502 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:01.502 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:01.502 08:39:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:14:01.502 08:39:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:14:01.502 08:39:56 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:14:01.502 08:39:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:14:01.502 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:01.502 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:01.502 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:01.502 08:39:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:14:01.502 08:39:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:14:01.503 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:01.503 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:01.503 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:01.503 08:39:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:14:01.503 { 00:14:01.503 "name": "Malloc0", 00:14:01.503 "aliases": [ 00:14:01.503 "a2b98173-12ac-4829-bcab-c4f5e36a1bc7" 00:14:01.503 ], 00:14:01.503 "product_name": "Malloc disk", 00:14:01.503 "block_size": 512, 00:14:01.503 "num_blocks": 16384, 00:14:01.503 "uuid": "a2b98173-12ac-4829-bcab-c4f5e36a1bc7", 00:14:01.503 "assigned_rate_limits": { 00:14:01.503 "rw_ios_per_sec": 0, 00:14:01.503 "rw_mbytes_per_sec": 0, 00:14:01.503 "r_mbytes_per_sec": 0, 00:14:01.503 "w_mbytes_per_sec": 0 00:14:01.503 }, 00:14:01.503 "claimed": false, 00:14:01.503 "zoned": false, 00:14:01.503 "supported_io_types": { 00:14:01.503 "read": true, 00:14:01.503 "write": true, 00:14:01.503 "unmap": true, 00:14:01.503 "write_zeroes": true, 00:14:01.503 "flush": true, 00:14:01.503 "reset": true, 00:14:01.503 "compare": false, 00:14:01.503 "compare_and_write": false, 00:14:01.503 "abort": true, 00:14:01.503 "nvme_admin": false, 00:14:01.503 "nvme_io": false 00:14:01.503 }, 00:14:01.503 "memory_domains": [ 00:14:01.503 { 00:14:01.503 "dma_device_id": "system", 00:14:01.503 "dma_device_type": 1 00:14:01.503 }, 00:14:01.503 { 00:14:01.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.503 "dma_device_type": 2 00:14:01.503 } 00:14:01.503 ], 00:14:01.503 "driver_specific": {} 00:14:01.503 } 00:14:01.503 ]' 00:14:01.503 08:39:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:14:01.503 08:39:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:14:01.503 08:39:56 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:14:01.503 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:01.503 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:01.503 [2024-05-15 08:39:56.193623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:14:01.503 [2024-05-15 08:39:56.193669] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.503 [2024-05-15 08:39:56.193694] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13053a0 00:14:01.503 [2024-05-15 08:39:56.193708] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.503 [2024-05-15 08:39:56.195210] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.503 [2024-05-15 08:39:56.195263] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:14:01.503 Passthru0 00:14:01.503 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:01.503 08:39:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:14:01.503 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:01.503 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:01.503 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:01.503 08:39:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:14:01.503 { 00:14:01.503 "name": "Malloc0", 00:14:01.503 "aliases": [ 00:14:01.503 "a2b98173-12ac-4829-bcab-c4f5e36a1bc7" 00:14:01.503 ], 00:14:01.503 "product_name": "Malloc disk", 00:14:01.503 "block_size": 512, 00:14:01.503 "num_blocks": 16384, 00:14:01.503 "uuid": "a2b98173-12ac-4829-bcab-c4f5e36a1bc7", 00:14:01.503 "assigned_rate_limits": { 00:14:01.503 "rw_ios_per_sec": 0, 00:14:01.503 "rw_mbytes_per_sec": 0, 00:14:01.503 "r_mbytes_per_sec": 0, 00:14:01.503 "w_mbytes_per_sec": 0 00:14:01.503 }, 00:14:01.503 "claimed": true, 00:14:01.503 "claim_type": "exclusive_write", 00:14:01.503 "zoned": false, 00:14:01.503 "supported_io_types": { 00:14:01.503 "read": true, 00:14:01.503 "write": true, 00:14:01.503 "unmap": true, 00:14:01.503 "write_zeroes": true, 00:14:01.503 "flush": true, 00:14:01.503 "reset": true, 00:14:01.503 "compare": false, 00:14:01.503 "compare_and_write": false, 00:14:01.503 "abort": true, 00:14:01.503 "nvme_admin": false, 00:14:01.503 "nvme_io": false 00:14:01.503 }, 00:14:01.503 "memory_domains": [ 00:14:01.503 { 00:14:01.503 "dma_device_id": "system", 00:14:01.503 "dma_device_type": 1 00:14:01.503 }, 00:14:01.503 { 00:14:01.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.503 "dma_device_type": 2 00:14:01.503 } 00:14:01.503 ], 00:14:01.503 "driver_specific": {} 00:14:01.503 }, 00:14:01.503 { 00:14:01.503 "name": "Passthru0", 00:14:01.503 "aliases": [ 00:14:01.503 "e4ece87d-6ff8-5bd4-b91d-cfc984a2dae3" 00:14:01.503 ], 00:14:01.503 "product_name": "passthru", 00:14:01.503 "block_size": 512, 00:14:01.503 "num_blocks": 16384, 00:14:01.503 "uuid": "e4ece87d-6ff8-5bd4-b91d-cfc984a2dae3", 00:14:01.503 "assigned_rate_limits": { 00:14:01.503 "rw_ios_per_sec": 0, 00:14:01.503 "rw_mbytes_per_sec": 0, 00:14:01.503 "r_mbytes_per_sec": 0, 00:14:01.503 "w_mbytes_per_sec": 0 00:14:01.503 }, 00:14:01.503 "claimed": false, 00:14:01.503 "zoned": false, 00:14:01.503 "supported_io_types": { 00:14:01.503 "read": true, 00:14:01.503 "write": true, 00:14:01.503 "unmap": true, 00:14:01.503 "write_zeroes": true, 00:14:01.503 "flush": true, 00:14:01.503 "reset": true, 00:14:01.503 "compare": false, 00:14:01.503 "compare_and_write": false, 00:14:01.503 "abort": true, 00:14:01.503 "nvme_admin": false, 00:14:01.503 "nvme_io": false 00:14:01.503 }, 00:14:01.503 "memory_domains": [ 00:14:01.503 { 00:14:01.503 "dma_device_id": "system", 00:14:01.503 "dma_device_type": 1 00:14:01.503 }, 00:14:01.503 { 00:14:01.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.503 "dma_device_type": 2 00:14:01.503 } 00:14:01.503 ], 00:14:01.503 "driver_specific": { 00:14:01.503 "passthru": { 00:14:01.503 "name": "Passthru0", 00:14:01.503 "base_bdev_name": "Malloc0" 00:14:01.503 } 00:14:01.503 } 00:14:01.503 } 00:14:01.503 ]' 00:14:01.503 08:39:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:14:01.504 08:39:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:14:01.504 08:39:56 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:14:01.504 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:01.504 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:01.504 
08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:01.504 08:39:56 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:01.504 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:01.504 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:01.504 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:01.504 08:39:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:01.504 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:01.504 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:01.504 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:01.504 08:39:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:14:01.504 08:39:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:14:01.762 08:39:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:14:01.762 00:14:01.762 real 0m0.229s 00:14:01.762 user 0m0.151s 00:14:01.762 sys 0m0.019s 00:14:01.762 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:01.762 08:39:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:01.762 ************************************ 00:14:01.762 END TEST rpc_integrity 00:14:01.762 ************************************ 00:14:01.762 08:39:56 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:14:01.762 08:39:56 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:01.762 08:39:56 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:01.762 08:39:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.762 ************************************ 00:14:01.762 START TEST rpc_plugins 00:14:01.762 ************************************ 00:14:01.762 08:39:56 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # rpc_plugins 00:14:01.762 08:39:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:14:01.762 08:39:56 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:01.762 08:39:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:14:01.762 08:39:56 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:01.762 08:39:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:14:01.762 08:39:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:14:01.762 08:39:56 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:01.762 08:39:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:14:01.762 08:39:56 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:01.762 08:39:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:14:01.762 { 00:14:01.762 "name": "Malloc1", 00:14:01.762 "aliases": [ 00:14:01.762 "7b3cbd53-9f53-4df1-8812-e5cdc2638cb1" 00:14:01.762 ], 00:14:01.762 "product_name": "Malloc disk", 00:14:01.762 "block_size": 4096, 00:14:01.762 "num_blocks": 256, 00:14:01.762 "uuid": "7b3cbd53-9f53-4df1-8812-e5cdc2638cb1", 00:14:01.762 "assigned_rate_limits": { 00:14:01.762 "rw_ios_per_sec": 0, 00:14:01.762 "rw_mbytes_per_sec": 0, 00:14:01.762 "r_mbytes_per_sec": 0, 00:14:01.762 "w_mbytes_per_sec": 0 00:14:01.762 }, 00:14:01.762 "claimed": false, 00:14:01.762 "zoned": false, 00:14:01.762 "supported_io_types": { 00:14:01.762 "read": true, 00:14:01.762 "write": true, 00:14:01.762 "unmap": true, 00:14:01.762 "write_zeroes": true, 00:14:01.762 
"flush": true, 00:14:01.762 "reset": true, 00:14:01.762 "compare": false, 00:14:01.762 "compare_and_write": false, 00:14:01.762 "abort": true, 00:14:01.762 "nvme_admin": false, 00:14:01.762 "nvme_io": false 00:14:01.762 }, 00:14:01.762 "memory_domains": [ 00:14:01.762 { 00:14:01.762 "dma_device_id": "system", 00:14:01.762 "dma_device_type": 1 00:14:01.762 }, 00:14:01.762 { 00:14:01.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.762 "dma_device_type": 2 00:14:01.762 } 00:14:01.762 ], 00:14:01.762 "driver_specific": {} 00:14:01.762 } 00:14:01.762 ]' 00:14:01.762 08:39:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:14:01.762 08:39:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:14:01.762 08:39:56 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:14:01.762 08:39:56 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:01.762 08:39:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:14:01.762 08:39:56 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:01.762 08:39:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:14:01.762 08:39:56 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:01.762 08:39:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:14:01.762 08:39:56 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:01.762 08:39:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:14:01.762 08:39:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:14:01.762 08:39:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:14:01.762 00:14:01.762 real 0m0.112s 00:14:01.762 user 0m0.072s 00:14:01.762 sys 0m0.012s 00:14:01.762 08:39:56 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:01.762 08:39:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:14:01.763 ************************************ 00:14:01.763 END TEST rpc_plugins 00:14:01.763 ************************************ 00:14:01.763 08:39:56 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:14:01.763 08:39:56 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:01.763 08:39:56 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:01.763 08:39:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.763 ************************************ 00:14:01.763 START TEST rpc_trace_cmd_test 00:14:01.763 ************************************ 00:14:01.763 08:39:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # rpc_trace_cmd_test 00:14:01.763 08:39:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:14:01.763 08:39:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:14:01.763 08:39:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:01.763 08:39:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.763 08:39:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:01.763 08:39:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:14:01.763 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2136630", 00:14:01.763 "tpoint_group_mask": "0x8", 00:14:01.763 "iscsi_conn": { 00:14:01.763 "mask": "0x2", 00:14:01.763 "tpoint_mask": "0x0" 00:14:01.763 }, 00:14:01.763 "scsi": { 00:14:01.763 "mask": "0x4", 00:14:01.763 "tpoint_mask": "0x0" 00:14:01.763 }, 00:14:01.763 "bdev": { 00:14:01.763 "mask": "0x8", 00:14:01.763 "tpoint_mask": 
"0xffffffffffffffff" 00:14:01.763 }, 00:14:01.763 "nvmf_rdma": { 00:14:01.763 "mask": "0x10", 00:14:01.763 "tpoint_mask": "0x0" 00:14:01.763 }, 00:14:01.763 "nvmf_tcp": { 00:14:01.763 "mask": "0x20", 00:14:01.763 "tpoint_mask": "0x0" 00:14:01.763 }, 00:14:01.763 "ftl": { 00:14:01.763 "mask": "0x40", 00:14:01.763 "tpoint_mask": "0x0" 00:14:01.763 }, 00:14:01.763 "blobfs": { 00:14:01.763 "mask": "0x80", 00:14:01.763 "tpoint_mask": "0x0" 00:14:01.763 }, 00:14:01.763 "dsa": { 00:14:01.763 "mask": "0x200", 00:14:01.763 "tpoint_mask": "0x0" 00:14:01.763 }, 00:14:01.763 "thread": { 00:14:01.763 "mask": "0x400", 00:14:01.763 "tpoint_mask": "0x0" 00:14:01.763 }, 00:14:01.763 "nvme_pcie": { 00:14:01.763 "mask": "0x800", 00:14:01.763 "tpoint_mask": "0x0" 00:14:01.763 }, 00:14:01.763 "iaa": { 00:14:01.763 "mask": "0x1000", 00:14:01.763 "tpoint_mask": "0x0" 00:14:01.763 }, 00:14:01.763 "nvme_tcp": { 00:14:01.763 "mask": "0x2000", 00:14:01.763 "tpoint_mask": "0x0" 00:14:01.763 }, 00:14:01.763 "bdev_nvme": { 00:14:01.763 "mask": "0x4000", 00:14:01.763 "tpoint_mask": "0x0" 00:14:01.763 }, 00:14:01.763 "sock": { 00:14:01.763 "mask": "0x8000", 00:14:01.763 "tpoint_mask": "0x0" 00:14:01.763 } 00:14:01.763 }' 00:14:01.763 08:39:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:14:02.021 08:39:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:14:02.021 08:39:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:14:02.021 08:39:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:14:02.021 08:39:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:14:02.021 08:39:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:14:02.021 08:39:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:14:02.021 08:39:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:14:02.021 08:39:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:14:02.021 08:39:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:14:02.021 00:14:02.021 real 0m0.201s 00:14:02.021 user 0m0.175s 00:14:02.021 sys 0m0.017s 00:14:02.021 08:39:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:02.021 08:39:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.021 ************************************ 00:14:02.021 END TEST rpc_trace_cmd_test 00:14:02.021 ************************************ 00:14:02.021 08:39:56 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:14:02.021 08:39:56 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:14:02.021 08:39:56 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:14:02.021 08:39:56 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:02.021 08:39:56 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:02.021 08:39:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.021 ************************************ 00:14:02.021 START TEST rpc_daemon_integrity 00:14:02.021 ************************************ 00:14:02.021 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:14:02.021 08:39:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:02.021 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.021 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:02.021 08:39:56 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.021 08:39:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:14:02.021 08:39:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:14:02.280 { 00:14:02.280 "name": "Malloc2", 00:14:02.280 "aliases": [ 00:14:02.280 "4b904b2d-9774-484e-bc48-8b0b13922d67" 00:14:02.280 ], 00:14:02.280 "product_name": "Malloc disk", 00:14:02.280 "block_size": 512, 00:14:02.280 "num_blocks": 16384, 00:14:02.280 "uuid": "4b904b2d-9774-484e-bc48-8b0b13922d67", 00:14:02.280 "assigned_rate_limits": { 00:14:02.280 "rw_ios_per_sec": 0, 00:14:02.280 "rw_mbytes_per_sec": 0, 00:14:02.280 "r_mbytes_per_sec": 0, 00:14:02.280 "w_mbytes_per_sec": 0 00:14:02.280 }, 00:14:02.280 "claimed": false, 00:14:02.280 "zoned": false, 00:14:02.280 "supported_io_types": { 00:14:02.280 "read": true, 00:14:02.280 "write": true, 00:14:02.280 "unmap": true, 00:14:02.280 "write_zeroes": true, 00:14:02.280 "flush": true, 00:14:02.280 "reset": true, 00:14:02.280 "compare": false, 00:14:02.280 "compare_and_write": false, 00:14:02.280 "abort": true, 00:14:02.280 "nvme_admin": false, 00:14:02.280 "nvme_io": false 00:14:02.280 }, 00:14:02.280 "memory_domains": [ 00:14:02.280 { 00:14:02.280 "dma_device_id": "system", 00:14:02.280 "dma_device_type": 1 00:14:02.280 }, 00:14:02.280 { 00:14:02.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.280 "dma_device_type": 2 00:14:02.280 } 00:14:02.280 ], 00:14:02.280 "driver_specific": {} 00:14:02.280 } 00:14:02.280 ]' 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:02.280 [2024-05-15 08:39:56.895906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:14:02.280 [2024-05-15 08:39:56.895949] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.280 [2024-05-15 08:39:56.895979] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x114c940 00:14:02.280 [2024-05-15 08:39:56.895997] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.280 [2024-05-15 08:39:56.897344] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.280 [2024-05-15 08:39:56.897372] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:14:02.280 Passthru0 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.280 08:39:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:14:02.280 { 00:14:02.280 "name": "Malloc2", 00:14:02.280 "aliases": [ 00:14:02.280 "4b904b2d-9774-484e-bc48-8b0b13922d67" 00:14:02.280 ], 00:14:02.280 "product_name": "Malloc disk", 00:14:02.280 "block_size": 512, 00:14:02.280 "num_blocks": 16384, 00:14:02.280 "uuid": "4b904b2d-9774-484e-bc48-8b0b13922d67", 00:14:02.280 "assigned_rate_limits": { 00:14:02.280 "rw_ios_per_sec": 0, 00:14:02.280 "rw_mbytes_per_sec": 0, 00:14:02.280 "r_mbytes_per_sec": 0, 00:14:02.280 "w_mbytes_per_sec": 0 00:14:02.280 }, 00:14:02.280 "claimed": true, 00:14:02.280 "claim_type": "exclusive_write", 00:14:02.280 "zoned": false, 00:14:02.280 "supported_io_types": { 00:14:02.280 "read": true, 00:14:02.280 "write": true, 00:14:02.280 "unmap": true, 00:14:02.280 "write_zeroes": true, 00:14:02.280 "flush": true, 00:14:02.280 "reset": true, 00:14:02.280 "compare": false, 00:14:02.280 "compare_and_write": false, 00:14:02.280 "abort": true, 00:14:02.280 "nvme_admin": false, 00:14:02.280 "nvme_io": false 00:14:02.280 }, 00:14:02.280 "memory_domains": [ 00:14:02.280 { 00:14:02.280 "dma_device_id": "system", 00:14:02.280 "dma_device_type": 1 00:14:02.280 }, 00:14:02.280 { 00:14:02.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.280 "dma_device_type": 2 00:14:02.280 } 00:14:02.280 ], 00:14:02.280 "driver_specific": {} 00:14:02.280 }, 00:14:02.280 { 00:14:02.280 "name": "Passthru0", 00:14:02.281 "aliases": [ 00:14:02.281 "f64be02e-2e4a-5b31-a0c5-ad435ca77e5d" 00:14:02.281 ], 00:14:02.281 "product_name": "passthru", 00:14:02.281 "block_size": 512, 00:14:02.281 "num_blocks": 16384, 00:14:02.281 "uuid": "f64be02e-2e4a-5b31-a0c5-ad435ca77e5d", 00:14:02.281 "assigned_rate_limits": { 00:14:02.281 "rw_ios_per_sec": 0, 00:14:02.281 "rw_mbytes_per_sec": 0, 00:14:02.281 "r_mbytes_per_sec": 0, 00:14:02.281 "w_mbytes_per_sec": 0 00:14:02.281 }, 00:14:02.281 "claimed": false, 00:14:02.281 "zoned": false, 00:14:02.281 "supported_io_types": { 00:14:02.281 "read": true, 00:14:02.281 "write": true, 00:14:02.281 "unmap": true, 00:14:02.281 "write_zeroes": true, 00:14:02.281 "flush": true, 00:14:02.281 "reset": true, 00:14:02.281 "compare": false, 00:14:02.281 "compare_and_write": false, 00:14:02.281 "abort": true, 00:14:02.281 "nvme_admin": false, 00:14:02.281 "nvme_io": false 00:14:02.281 }, 00:14:02.281 "memory_domains": [ 00:14:02.281 { 00:14:02.281 "dma_device_id": "system", 00:14:02.281 "dma_device_type": 1 00:14:02.281 }, 00:14:02.281 { 00:14:02.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.281 "dma_device_type": 2 00:14:02.281 } 00:14:02.281 ], 00:14:02.281 "driver_specific": { 00:14:02.281 "passthru": { 00:14:02.281 "name": "Passthru0", 00:14:02.281 "base_bdev_name": "Malloc2" 00:14:02.281 } 00:14:02.281 } 00:14:02.281 } 00:14:02.281 ]' 00:14:02.281 08:39:56 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:14:02.281 08:39:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:14:02.281 08:39:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:14:02.281 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.281 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:02.281 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.281 08:39:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:02.281 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.281 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:02.281 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.281 08:39:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:02.281 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.281 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:02.281 08:39:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.281 08:39:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:14:02.281 08:39:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:14:02.281 08:39:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:14:02.281 00:14:02.281 real 0m0.231s 00:14:02.281 user 0m0.158s 00:14:02.281 sys 0m0.018s 00:14:02.281 08:39:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:02.281 08:39:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:02.281 ************************************ 00:14:02.281 END TEST rpc_daemon_integrity 00:14:02.281 ************************************ 00:14:02.281 08:39:57 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:02.281 08:39:57 rpc -- rpc/rpc.sh@84 -- # killprocess 2136630 00:14:02.281 08:39:57 rpc -- common/autotest_common.sh@947 -- # '[' -z 2136630 ']' 00:14:02.281 08:39:57 rpc -- common/autotest_common.sh@951 -- # kill -0 2136630 00:14:02.281 08:39:57 rpc -- common/autotest_common.sh@952 -- # uname 00:14:02.281 08:39:57 rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:02.281 08:39:57 rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2136630 00:14:02.281 08:39:57 rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:02.281 08:39:57 rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:02.281 08:39:57 rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2136630' 00:14:02.281 killing process with pid 2136630 00:14:02.281 08:39:57 rpc -- common/autotest_common.sh@966 -- # kill 2136630 00:14:02.281 08:39:57 rpc -- common/autotest_common.sh@971 -- # wait 2136630 00:14:02.848 00:14:02.848 real 0m1.915s 00:14:02.848 user 0m2.390s 00:14:02.848 sys 0m0.613s 00:14:02.848 08:39:57 rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:02.848 08:39:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.848 ************************************ 00:14:02.848 END TEST rpc 00:14:02.848 ************************************ 00:14:02.848 08:39:57 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:14:02.848 08:39:57 
-- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:02.848 08:39:57 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:02.848 08:39:57 -- common/autotest_common.sh@10 -- # set +x 00:14:02.848 ************************************ 00:14:02.848 START TEST skip_rpc 00:14:02.848 ************************************ 00:14:02.848 08:39:57 skip_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:14:02.848 * Looking for test storage... 00:14:02.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:14:02.848 08:39:57 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:14:02.848 08:39:57 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:14:02.848 08:39:57 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:14:02.848 08:39:57 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:02.848 08:39:57 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:02.848 08:39:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.848 ************************************ 00:14:02.848 START TEST skip_rpc 00:14:02.848 ************************************ 00:14:02.848 08:39:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # test_skip_rpc 00:14:02.848 08:39:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2137069 00:14:02.848 08:39:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:14:02.848 08:39:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:02.848 08:39:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:14:03.106 [2024-05-15 08:39:57.653625] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:14:03.106 [2024-05-15 08:39:57.653708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2137069 ] 00:14:03.106 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.106 [2024-05-15 08:39:57.718310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.106 [2024-05-15 08:39:57.805193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2137069 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # '[' -z 2137069 ']' 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # kill -0 2137069 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # uname 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2137069 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2137069' 00:14:08.369 killing process with pid 2137069 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # kill 2137069 00:14:08.369 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # wait 2137069 00:14:08.369 00:14:08.369 real 0m5.440s 00:14:08.369 user 0m5.116s 00:14:08.369 sys 0m0.329s 00:14:08.369 08:40:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:08.369 08:40:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.369 ************************************ 00:14:08.369 END TEST skip_rpc 
00:14:08.369 ************************************ 00:14:08.369 08:40:03 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:14:08.369 08:40:03 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:08.369 08:40:03 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:08.369 08:40:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.369 ************************************ 00:14:08.369 START TEST skip_rpc_with_json 00:14:08.369 ************************************ 00:14:08.369 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_json 00:14:08.369 08:40:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:14:08.369 08:40:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2137756 00:14:08.369 08:40:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:14:08.369 08:40:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:08.369 08:40:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2137756 00:14:08.369 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@828 -- # '[' -z 2137756 ']' 00:14:08.369 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.369 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:08.369 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.369 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:08.369 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:14:08.369 [2024-05-15 08:40:03.146261] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:14:08.369 [2024-05-15 08:40:03.146363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2137756 ] 00:14:08.626 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.626 [2024-05-15 08:40:03.212776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.626 [2024-05-15 08:40:03.298576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.884 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:08.884 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@861 -- # return 0 00:14:08.884 08:40:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:14:08.884 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:08.884 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:14:08.884 [2024-05-15 08:40:03.553957] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:14:08.884 request: 00:14:08.884 { 00:14:08.884 "trtype": "tcp", 00:14:08.884 "method": "nvmf_get_transports", 00:14:08.884 "req_id": 1 00:14:08.884 } 00:14:08.884 Got JSON-RPC error response 00:14:08.884 response: 00:14:08.884 { 00:14:08.884 "code": -19, 00:14:08.884 "message": "No such device" 00:14:08.884 } 00:14:08.884 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:14:08.884 08:40:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:14:08.884 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:08.884 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:14:08.884 [2024-05-15 08:40:03.562088] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.884 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:08.884 08:40:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:14:08.884 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:08.884 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:14:09.142 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.142 08:40:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:14:09.142 { 00:14:09.142 "subsystems": [ 00:14:09.142 { 00:14:09.142 "subsystem": "vfio_user_target", 00:14:09.142 "config": null 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "subsystem": "keyring", 00:14:09.142 "config": [] 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "subsystem": "iobuf", 00:14:09.142 "config": [ 00:14:09.142 { 00:14:09.142 "method": "iobuf_set_options", 00:14:09.142 "params": { 00:14:09.142 "small_pool_count": 8192, 00:14:09.142 "large_pool_count": 1024, 00:14:09.142 "small_bufsize": 8192, 00:14:09.142 "large_bufsize": 135168 00:14:09.142 } 00:14:09.142 } 00:14:09.142 ] 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "subsystem": "sock", 00:14:09.142 "config": [ 00:14:09.142 { 00:14:09.142 "method": "sock_impl_set_options", 00:14:09.142 "params": { 00:14:09.142 "impl_name": "posix", 00:14:09.142 "recv_buf_size": 2097152, 00:14:09.142 "send_buf_size": 2097152, 
00:14:09.142 "enable_recv_pipe": true, 00:14:09.142 "enable_quickack": false, 00:14:09.142 "enable_placement_id": 0, 00:14:09.142 "enable_zerocopy_send_server": true, 00:14:09.142 "enable_zerocopy_send_client": false, 00:14:09.142 "zerocopy_threshold": 0, 00:14:09.142 "tls_version": 0, 00:14:09.142 "enable_ktls": false 00:14:09.142 } 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "method": "sock_impl_set_options", 00:14:09.142 "params": { 00:14:09.142 "impl_name": "ssl", 00:14:09.142 "recv_buf_size": 4096, 00:14:09.142 "send_buf_size": 4096, 00:14:09.142 "enable_recv_pipe": true, 00:14:09.142 "enable_quickack": false, 00:14:09.142 "enable_placement_id": 0, 00:14:09.142 "enable_zerocopy_send_server": true, 00:14:09.142 "enable_zerocopy_send_client": false, 00:14:09.142 "zerocopy_threshold": 0, 00:14:09.142 "tls_version": 0, 00:14:09.142 "enable_ktls": false 00:14:09.142 } 00:14:09.142 } 00:14:09.142 ] 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "subsystem": "vmd", 00:14:09.142 "config": [] 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "subsystem": "accel", 00:14:09.142 "config": [ 00:14:09.142 { 00:14:09.142 "method": "accel_set_options", 00:14:09.142 "params": { 00:14:09.142 "small_cache_size": 128, 00:14:09.142 "large_cache_size": 16, 00:14:09.142 "task_count": 2048, 00:14:09.142 "sequence_count": 2048, 00:14:09.142 "buf_count": 2048 00:14:09.142 } 00:14:09.142 } 00:14:09.142 ] 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "subsystem": "bdev", 00:14:09.142 "config": [ 00:14:09.142 { 00:14:09.142 "method": "bdev_set_options", 00:14:09.142 "params": { 00:14:09.142 "bdev_io_pool_size": 65535, 00:14:09.142 "bdev_io_cache_size": 256, 00:14:09.142 "bdev_auto_examine": true, 00:14:09.142 "iobuf_small_cache_size": 128, 00:14:09.142 "iobuf_large_cache_size": 16 00:14:09.142 } 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "method": "bdev_raid_set_options", 00:14:09.142 "params": { 00:14:09.142 "process_window_size_kb": 1024 00:14:09.142 } 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "method": "bdev_iscsi_set_options", 00:14:09.142 "params": { 00:14:09.142 "timeout_sec": 30 00:14:09.142 } 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "method": "bdev_nvme_set_options", 00:14:09.142 "params": { 00:14:09.142 "action_on_timeout": "none", 00:14:09.142 "timeout_us": 0, 00:14:09.142 "timeout_admin_us": 0, 00:14:09.142 "keep_alive_timeout_ms": 10000, 00:14:09.142 "arbitration_burst": 0, 00:14:09.142 "low_priority_weight": 0, 00:14:09.142 "medium_priority_weight": 0, 00:14:09.142 "high_priority_weight": 0, 00:14:09.142 "nvme_adminq_poll_period_us": 10000, 00:14:09.142 "nvme_ioq_poll_period_us": 0, 00:14:09.142 "io_queue_requests": 0, 00:14:09.142 "delay_cmd_submit": true, 00:14:09.142 "transport_retry_count": 4, 00:14:09.142 "bdev_retry_count": 3, 00:14:09.142 "transport_ack_timeout": 0, 00:14:09.142 "ctrlr_loss_timeout_sec": 0, 00:14:09.142 "reconnect_delay_sec": 0, 00:14:09.142 "fast_io_fail_timeout_sec": 0, 00:14:09.142 "disable_auto_failback": false, 00:14:09.142 "generate_uuids": false, 00:14:09.142 "transport_tos": 0, 00:14:09.142 "nvme_error_stat": false, 00:14:09.142 "rdma_srq_size": 0, 00:14:09.142 "io_path_stat": false, 00:14:09.142 "allow_accel_sequence": false, 00:14:09.142 "rdma_max_cq_size": 0, 00:14:09.142 "rdma_cm_event_timeout_ms": 0, 00:14:09.142 "dhchap_digests": [ 00:14:09.142 "sha256", 00:14:09.142 "sha384", 00:14:09.142 "sha512" 00:14:09.142 ], 00:14:09.142 "dhchap_dhgroups": [ 00:14:09.142 "null", 00:14:09.142 "ffdhe2048", 00:14:09.142 "ffdhe3072", 00:14:09.142 "ffdhe4096", 00:14:09.142 
"ffdhe6144", 00:14:09.142 "ffdhe8192" 00:14:09.142 ] 00:14:09.142 } 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "method": "bdev_nvme_set_hotplug", 00:14:09.142 "params": { 00:14:09.142 "period_us": 100000, 00:14:09.142 "enable": false 00:14:09.142 } 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "method": "bdev_wait_for_examine" 00:14:09.142 } 00:14:09.142 ] 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "subsystem": "scsi", 00:14:09.142 "config": null 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "subsystem": "scheduler", 00:14:09.142 "config": [ 00:14:09.142 { 00:14:09.142 "method": "framework_set_scheduler", 00:14:09.142 "params": { 00:14:09.142 "name": "static" 00:14:09.142 } 00:14:09.142 } 00:14:09.142 ] 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "subsystem": "vhost_scsi", 00:14:09.142 "config": [] 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "subsystem": "vhost_blk", 00:14:09.142 "config": [] 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "subsystem": "ublk", 00:14:09.142 "config": [] 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "subsystem": "nbd", 00:14:09.142 "config": [] 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "subsystem": "nvmf", 00:14:09.142 "config": [ 00:14:09.142 { 00:14:09.142 "method": "nvmf_set_config", 00:14:09.142 "params": { 00:14:09.142 "discovery_filter": "match_any", 00:14:09.142 "admin_cmd_passthru": { 00:14:09.142 "identify_ctrlr": false 00:14:09.142 } 00:14:09.142 } 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "method": "nvmf_set_max_subsystems", 00:14:09.142 "params": { 00:14:09.142 "max_subsystems": 1024 00:14:09.142 } 00:14:09.142 }, 00:14:09.142 { 00:14:09.143 "method": "nvmf_set_crdt", 00:14:09.143 "params": { 00:14:09.143 "crdt1": 0, 00:14:09.143 "crdt2": 0, 00:14:09.143 "crdt3": 0 00:14:09.143 } 00:14:09.143 }, 00:14:09.143 { 00:14:09.143 "method": "nvmf_create_transport", 00:14:09.143 "params": { 00:14:09.143 "trtype": "TCP", 00:14:09.143 "max_queue_depth": 128, 00:14:09.143 "max_io_qpairs_per_ctrlr": 127, 00:14:09.143 "in_capsule_data_size": 4096, 00:14:09.143 "max_io_size": 131072, 00:14:09.143 "io_unit_size": 131072, 00:14:09.143 "max_aq_depth": 128, 00:14:09.143 "num_shared_buffers": 511, 00:14:09.143 "buf_cache_size": 4294967295, 00:14:09.143 "dif_insert_or_strip": false, 00:14:09.143 "zcopy": false, 00:14:09.143 "c2h_success": true, 00:14:09.143 "sock_priority": 0, 00:14:09.143 "abort_timeout_sec": 1, 00:14:09.143 "ack_timeout": 0, 00:14:09.143 "data_wr_pool_size": 0 00:14:09.143 } 00:14:09.143 } 00:14:09.143 ] 00:14:09.143 }, 00:14:09.143 { 00:14:09.143 "subsystem": "iscsi", 00:14:09.143 "config": [ 00:14:09.143 { 00:14:09.143 "method": "iscsi_set_options", 00:14:09.143 "params": { 00:14:09.143 "node_base": "iqn.2016-06.io.spdk", 00:14:09.143 "max_sessions": 128, 00:14:09.143 "max_connections_per_session": 2, 00:14:09.143 "max_queue_depth": 64, 00:14:09.143 "default_time2wait": 2, 00:14:09.143 "default_time2retain": 20, 00:14:09.143 "first_burst_length": 8192, 00:14:09.143 "immediate_data": true, 00:14:09.143 "allow_duplicated_isid": false, 00:14:09.143 "error_recovery_level": 0, 00:14:09.143 "nop_timeout": 60, 00:14:09.143 "nop_in_interval": 30, 00:14:09.143 "disable_chap": false, 00:14:09.143 "require_chap": false, 00:14:09.143 "mutual_chap": false, 00:14:09.143 "chap_group": 0, 00:14:09.143 "max_large_datain_per_connection": 64, 00:14:09.143 "max_r2t_per_connection": 4, 00:14:09.143 "pdu_pool_size": 36864, 00:14:09.143 "immediate_data_pool_size": 16384, 00:14:09.143 "data_out_pool_size": 2048 00:14:09.143 } 00:14:09.143 } 00:14:09.143 ] 00:14:09.143 } 
00:14:09.143 ] 00:14:09.143 } 00:14:09.143 08:40:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:09.143 08:40:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2137756 00:14:09.143 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 2137756 ']' 00:14:09.143 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 2137756 00:14:09.143 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:14:09.143 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:09.143 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2137756 00:14:09.143 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:09.143 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:09.143 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2137756' 00:14:09.143 killing process with pid 2137756 00:14:09.143 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 2137756 00:14:09.143 08:40:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 2137756 00:14:09.400 08:40:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2137893 00:14:09.400 08:40:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:14:09.400 08:40:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:14:14.666 08:40:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2137893 00:14:14.666 08:40:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 2137893 ']' 00:14:14.666 08:40:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 2137893 00:14:14.666 08:40:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:14:14.666 08:40:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:14.666 08:40:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2137893 00:14:14.666 08:40:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:14.666 08:40:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:14.666 08:40:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2137893' 00:14:14.666 killing process with pid 2137893 00:14:14.666 08:40:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 2137893 00:14:14.666 08:40:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 2137893 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:14:14.924 00:14:14.924 real 0m6.477s 00:14:14.924 user 0m6.062s 00:14:14.924 sys 0m0.705s 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # xtrace_disable 
00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:14:14.924 ************************************ 00:14:14.924 END TEST skip_rpc_with_json 00:14:14.924 ************************************ 00:14:14.924 08:40:09 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:14:14.924 08:40:09 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:14.924 08:40:09 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:14.924 08:40:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.924 ************************************ 00:14:14.924 START TEST skip_rpc_with_delay 00:14:14.924 ************************************ 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_delay 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:14:14.924 [2024-05-15 08:40:09.674137] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:14:14.924 [2024-05-15 08:40:09.674289] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:14.924 00:14:14.924 real 0m0.064s 00:14:14.924 user 0m0.044s 00:14:14.924 sys 0m0.019s 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:14.924 08:40:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:14:14.924 ************************************ 00:14:14.924 END TEST skip_rpc_with_delay 00:14:14.924 ************************************ 00:14:14.924 08:40:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:14:14.924 08:40:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:14:14.924 08:40:09 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:14:14.924 08:40:09 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:14.924 08:40:09 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:14.924 08:40:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.182 ************************************ 00:14:15.182 START TEST exit_on_failed_rpc_init 00:14:15.182 ************************************ 00:14:15.182 08:40:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # test_exit_on_failed_rpc_init 00:14:15.182 08:40:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2138609 00:14:15.182 08:40:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:14:15.182 08:40:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2138609 00:14:15.182 08:40:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@828 -- # '[' -z 2138609 ']' 00:14:15.182 08:40:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.182 08:40:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:15.182 08:40:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.182 08:40:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:15.182 08:40:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:14:15.182 [2024-05-15 08:40:09.788283] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:14:15.182 [2024-05-15 08:40:09.788381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138609 ] 00:14:15.182 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.182 [2024-05-15 08:40:09.854378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.182 [2024-05-15 08:40:09.938779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.440 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:15.440 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@861 -- # return 0 00:14:15.440 08:40:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:15.440 08:40:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:14:15.440 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:14:15.440 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:14:15.440 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:14:15.440 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:15.440 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:14:15.440 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:15.440 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:14:15.440 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:15.440 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:14:15.440 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:14:15.440 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:14:15.699 [2024-05-15 08:40:10.244795] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:14:15.699 [2024-05-15 08:40:10.244870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138619 ] 00:14:15.699 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.699 [2024-05-15 08:40:10.315992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.699 [2024-05-15 08:40:10.410429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.699 [2024-05-15 08:40:10.410537] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:14:15.699 [2024-05-15 08:40:10.410560] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:14:15.699 [2024-05-15 08:40:10.410583] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:15.957 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:14:15.957 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:15.957 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:14:15.957 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:14:15.957 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:14:15.957 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:15.957 08:40:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:15.957 08:40:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2138609 00:14:15.957 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # '[' -z 2138609 ']' 00:14:15.957 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # kill -0 2138609 00:14:15.957 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # uname 00:14:15.957 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:15.957 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2138609 00:14:15.957 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:15.957 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:15.957 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2138609' 00:14:15.957 killing process with pid 2138609 00:14:15.957 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # kill 2138609 00:14:15.957 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # wait 2138609 00:14:16.235 00:14:16.235 real 0m1.203s 00:14:16.235 user 0m1.294s 00:14:16.235 sys 0m0.470s 00:14:16.235 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:16.235 08:40:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:14:16.235 ************************************ 00:14:16.235 END TEST exit_on_failed_rpc_init 00:14:16.235 ************************************ 00:14:16.235 08:40:10 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:14:16.235 00:14:16.235 real 0m13.444s 00:14:16.235 user 0m12.635s 00:14:16.235 sys 0m1.673s 00:14:16.235 08:40:10 skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:16.235 08:40:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.235 ************************************ 00:14:16.235 END TEST skip_rpc 00:14:16.235 ************************************ 00:14:16.235 08:40:10 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:14:16.235 08:40:10 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:16.235 08:40:10 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:16.235 08:40:10 -- 
common/autotest_common.sh@10 -- # set +x 00:14:16.507 ************************************ 00:14:16.507 START TEST rpc_client 00:14:16.507 ************************************ 00:14:16.507 08:40:11 rpc_client -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:14:16.507 * Looking for test storage... 00:14:16.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:14:16.507 08:40:11 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:14:16.507 OK 00:14:16.507 08:40:11 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:14:16.507 00:14:16.507 real 0m0.060s 00:14:16.507 user 0m0.026s 00:14:16.507 sys 0m0.039s 00:14:16.507 08:40:11 rpc_client -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:16.507 08:40:11 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:14:16.507 ************************************ 00:14:16.507 END TEST rpc_client 00:14:16.507 ************************************ 00:14:16.507 08:40:11 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:14:16.507 08:40:11 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:16.507 08:40:11 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:16.507 08:40:11 -- common/autotest_common.sh@10 -- # set +x 00:14:16.507 ************************************ 00:14:16.507 START TEST json_config 00:14:16.507 ************************************ 00:14:16.507 08:40:11 json_config -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:14:16.507 08:40:11 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:16.507 08:40:11 json_config -- nvmf/common.sh@7 -- # uname -s 00:14:16.507 08:40:11 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.507 08:40:11 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.507 08:40:11 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.507 08:40:11 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.507 08:40:11 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.507 08:40:11 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.507 08:40:11 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.507 08:40:11 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.507 08:40:11 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.507 08:40:11 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.507 08:40:11 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:16.507 08:40:11 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:16.507 08:40:11 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.507 08:40:11 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.507 08:40:11 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:16.507 08:40:11 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.507 08:40:11 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:16.507 08:40:11 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.507 08:40:11 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.507 08:40:11 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.507 08:40:11 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.507 08:40:11 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.507 08:40:11 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.507 08:40:11 json_config -- paths/export.sh@5 -- # export PATH 00:14:16.507 08:40:11 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.507 08:40:11 json_config -- nvmf/common.sh@47 -- # : 0 00:14:16.507 08:40:11 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:16.507 08:40:11 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:16.508 08:40:11 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.508 08:40:11 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.508 08:40:11 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.508 08:40:11 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:16.508 08:40:11 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:16.508 08:40:11 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:16.508 08:40:11 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:14:16.508 08:40:11 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:14:16.508 08:40:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:14:16.508 08:40:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:14:16.508 08:40:11 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:14:16.508 08:40:11 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:14:16.508 08:40:11 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:14:16.508 08:40:11 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:14:16.508 08:40:11 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:14:16.508 08:40:11 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:14:16.508 08:40:11 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:14:16.508 08:40:11 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:14:16.508 08:40:11 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:14:16.508 08:40:11 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:14:16.508 08:40:11 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:14:16.508 08:40:11 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:14:16.508 INFO: JSON configuration test init 00:14:16.508 08:40:11 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:14:16.508 08:40:11 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:14:16.508 08:40:11 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:16.508 08:40:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:16.508 08:40:11 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:14:16.508 08:40:11 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:16.508 08:40:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:16.508 08:40:11 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:14:16.508 08:40:11 json_config -- json_config/common.sh@9 -- # local app=target 00:14:16.508 08:40:11 json_config -- json_config/common.sh@10 -- # shift 00:14:16.508 08:40:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:14:16.508 08:40:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:14:16.508 08:40:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:14:16.508 08:40:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:16.508 08:40:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:16.508 08:40:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2138864 00:14:16.508 08:40:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:14:16.508 08:40:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:14:16.508 Waiting for target to run... 
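For context on the waitforlisten call that follows: a minimal stand-alone sketch of this launch-and-wait step, using the binary, socket, and app parameters from this run. The retry budget and the use of rpc_get_methods as the liveness probe are assumptions; the real waitforlisten helper in autotest_common.sh adds xtrace handling and richer error reporting.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
tgt_pid=$!
for _ in $(seq 1 100); do
    # an RPC answering on the UNIX socket means the target is up, even while
    # it is parked in the --wait-for-rpc pre-initialization state
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done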
00:14:16.508 08:40:11 json_config -- json_config/common.sh@25 -- # waitforlisten 2138864 /var/tmp/spdk_tgt.sock 00:14:16.508 08:40:11 json_config -- common/autotest_common.sh@828 -- # '[' -z 2138864 ']' 00:14:16.508 08:40:11 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:14:16.508 08:40:11 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:16.508 08:40:11 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:14:16.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:14:16.508 08:40:11 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:16.508 08:40:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:16.508 [2024-05-15 08:40:11.231467] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:14:16.508 [2024-05-15 08:40:11.231596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138864 ] 00:14:16.508 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.074 [2024-05-15 08:40:11.589727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.074 [2024-05-15 08:40:11.648953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.639 08:40:12 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:17.639 08:40:12 json_config -- common/autotest_common.sh@861 -- # return 0 00:14:17.639 08:40:12 json_config -- json_config/common.sh@26 -- # echo '' 00:14:17.639 00:14:17.639 08:40:12 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:14:17.639 08:40:12 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:14:17.639 08:40:12 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:17.639 08:40:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:17.639 08:40:12 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:14:17.639 08:40:12 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:14:17.639 08:40:12 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:17.639 08:40:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:17.639 08:40:12 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:14:17.639 08:40:12 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:14:17.639 08:40:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:14:20.918 08:40:15 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:14:20.918 08:40:15 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:14:20.918 08:40:15 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:20.918 08:40:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:20.918 08:40:15 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:14:20.918 08:40:15 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:14:20.918 08:40:15 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:14:20.918 08:40:15 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:14:20.918 08:40:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:14:20.918 08:40:15 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:14:20.918 08:40:15 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:14:20.918 08:40:15 json_config -- json_config/json_config.sh@48 -- # local get_types 00:14:20.918 08:40:15 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:14:20.918 08:40:15 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:14:20.918 08:40:15 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:20.918 08:40:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:20.918 08:40:15 json_config -- json_config/json_config.sh@55 -- # return 0 00:14:20.918 08:40:15 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:14:20.918 08:40:15 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:14:20.918 08:40:15 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:14:20.918 08:40:15 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:14:20.918 08:40:15 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:14:20.918 08:40:15 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:14:20.918 08:40:15 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:20.918 08:40:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:20.918 08:40:15 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:14:20.918 08:40:15 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:14:20.918 08:40:15 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:14:20.918 08:40:15 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:14:20.918 08:40:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:14:21.177 MallocForNvmf0 00:14:21.177 08:40:15 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:14:21.177 08:40:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:14:21.435 MallocForNvmf1 00:14:21.435 08:40:16 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:14:21.435 08:40:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:14:21.693 [2024-05-15 08:40:16.361082] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.693 08:40:16 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:21.694 08:40:16 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:21.952 08:40:16 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:14:21.952 08:40:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:14:22.210 08:40:16 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:14:22.210 08:40:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:14:22.468 08:40:17 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:14:22.468 08:40:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:14:22.727 [2024-05-15 08:40:17.347826] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:22.727 [2024-05-15 08:40:17.348468] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:14:22.727 08:40:17 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:14:22.727 08:40:17 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:22.727 08:40:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:22.727 08:40:17 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:14:22.727 08:40:17 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:22.727 08:40:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:22.727 08:40:17 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:14:22.727 08:40:17 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:14:22.727 08:40:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:14:22.985 MallocBdevForConfigChangeCheck 00:14:22.985 08:40:17 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:14:22.985 08:40:17 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:22.985 08:40:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:22.985 08:40:17 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:14:22.985 08:40:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:14:23.551 08:40:18 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:14:23.551 INFO: shutting down applications... 
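Before the teardown that follows, the subsystem build-up the target just replayed, condensed to its underlying RPC calls. Every command below appears verbatim in the trace above; only the rpc() shorthand is a hypothetical convenience.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock "$@"; }
rpc bdev_malloc_create 8 512 --name MallocForNvmf0      # 8 MB bdev, 512-byte blocks
rpc bdev_malloc_create 4 1024 --name MallocForNvmf1     # 4 MB bdev, 1024-byte blocks
rpc nvmf_create_transport -t tcp -u 8192 -c 0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420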
00:14:23.551 08:40:18 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:14:23.551 08:40:18 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:14:23.551 08:40:18 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:14:23.551 08:40:18 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:14:24.925 Calling clear_iscsi_subsystem 00:14:24.925 Calling clear_nvmf_subsystem 00:14:24.925 Calling clear_nbd_subsystem 00:14:24.925 Calling clear_ublk_subsystem 00:14:24.925 Calling clear_vhost_blk_subsystem 00:14:24.925 Calling clear_vhost_scsi_subsystem 00:14:24.925 Calling clear_bdev_subsystem 00:14:24.925 08:40:19 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:14:24.925 08:40:19 json_config -- json_config/json_config.sh@343 -- # count=100 00:14:24.925 08:40:19 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:14:24.925 08:40:19 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:14:24.925 08:40:19 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:14:24.925 08:40:19 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:14:25.491 08:40:20 json_config -- json_config/json_config.sh@345 -- # break 00:14:25.491 08:40:20 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:14:25.491 08:40:20 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:14:25.491 08:40:20 json_config -- json_config/common.sh@31 -- # local app=target 00:14:25.491 08:40:20 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:14:25.491 08:40:20 json_config -- json_config/common.sh@35 -- # [[ -n 2138864 ]] 00:14:25.491 08:40:20 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2138864 00:14:25.491 [2024-05-15 08:40:20.054538] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:25.491 08:40:20 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:14:25.491 08:40:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:25.491 08:40:20 json_config -- json_config/common.sh@41 -- # kill -0 2138864 00:14:25.491 08:40:20 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:14:26.058 08:40:20 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:14:26.058 08:40:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:26.058 08:40:20 json_config -- json_config/common.sh@41 -- # kill -0 2138864 00:14:26.058 08:40:20 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:14:26.058 08:40:20 json_config -- json_config/common.sh@43 -- # break 00:14:26.058 08:40:20 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:14:26.058 08:40:20 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:14:26.058 SPDK target shutdown done 00:14:26.058 08:40:20 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching 
applications...' 00:14:26.058 INFO: relaunching applications... 00:14:26.058 08:40:20 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:14:26.058 08:40:20 json_config -- json_config/common.sh@9 -- # local app=target 00:14:26.058 08:40:20 json_config -- json_config/common.sh@10 -- # shift 00:14:26.058 08:40:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:14:26.058 08:40:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:14:26.058 08:40:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:14:26.058 08:40:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:26.058 08:40:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:26.058 08:40:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2140172 00:14:26.058 08:40:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:14:26.058 08:40:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:14:26.058 Waiting for target to run... 00:14:26.058 08:40:20 json_config -- json_config/common.sh@25 -- # waitforlisten 2140172 /var/tmp/spdk_tgt.sock 00:14:26.058 08:40:20 json_config -- common/autotest_common.sh@828 -- # '[' -z 2140172 ']' 00:14:26.058 08:40:20 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:14:26.058 08:40:20 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:26.058 08:40:20 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:14:26.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:14:26.058 08:40:20 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:26.058 08:40:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:26.058 [2024-05-15 08:40:20.611970] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
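The relaunch above differs from the first start in one flag: --wait-for-rpc is replaced by --json spdk_tgt_config.json, so the configuration saved a moment ago is replayed during startup instead of being driven live over RPC. Both launch commands are verbatim in the trace; only the side-by-side contrast is drawn here.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# first start: empty target, configured live over /var/tmp/spdk_tgt.sock
"$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
# relaunch: same invocation, but the previously saved JSON is loaded during init
"$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$SPDK/spdk_tgt_config.json"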
00:14:26.058 [2024-05-15 08:40:20.612057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140172 ] 00:14:26.058 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.625 [2024-05-15 08:40:21.172258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.625 [2024-05-15 08:40:21.254993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.914 [2024-05-15 08:40:24.286365] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.914 [2024-05-15 08:40:24.318309] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:29.914 [2024-05-15 08:40:24.318907] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:14:30.478 08:40:25 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:30.478 08:40:25 json_config -- common/autotest_common.sh@861 -- # return 0 00:14:30.478 08:40:25 json_config -- json_config/common.sh@26 -- # echo '' 00:14:30.478 00:14:30.478 08:40:25 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:14:30.478 08:40:25 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:14:30.478 INFO: Checking if target configuration is the same... 00:14:30.478 08:40:25 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:14:30.478 08:40:25 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:14:30.478 08:40:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:14:30.478 + '[' 2 -ne 2 ']' 00:14:30.478 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:14:30.478 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:14:30.478 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:30.478 +++ basename /dev/fd/62 00:14:30.478 ++ mktemp /tmp/62.XXX 00:14:30.478 + tmp_file_1=/tmp/62.azj 00:14:30.478 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:14:30.478 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:14:30.478 + tmp_file_2=/tmp/spdk_tgt_config.json.RL6 00:14:30.478 + ret=0 00:14:30.478 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:14:30.737 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:14:30.737 + diff -u /tmp/62.azj /tmp/spdk_tgt_config.json.RL6 00:14:30.737 + echo 'INFO: JSON config files are the same' 00:14:30.737 INFO: JSON config files are the same 00:14:30.737 + rm /tmp/62.azj /tmp/spdk_tgt_config.json.RL6 00:14:30.737 + exit 0 00:14:30.737 08:40:25 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:14:30.737 08:40:25 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:14:30.737 INFO: changing configuration and checking if this can be detected... 
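The 'JSON config files are the same' verdict above reduces to a simple pipeline: dump the live configuration, normalize both JSON documents, and diff them. A sketch of what json_diff.sh just did, using temp files in place of the /dev/fd/62 plumbing visible in the trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
live=$(mktemp) && file=$(mktemp)
"$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
    | "$SPDK/test/json_config/config_filter.py" -method sort > "$live"
"$SPDK/test/json_config/config_filter.py" -method sort \
    < "$SPDK/spdk_tgt_config.json" > "$file"
diff -u "$live" "$file"    # exit 0 means the configs match, hence ret=0 above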
00:14:30.737 08:40:25 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:14:30.737 08:40:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:14:30.995 08:40:25 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:14:30.995 08:40:25 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:14:30.995 08:40:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:14:30.995 + '[' 2 -ne 2 ']' 00:14:30.995 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:14:30.995 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:14:30.995 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:30.995 +++ basename /dev/fd/62 00:14:30.995 ++ mktemp /tmp/62.XXX 00:14:30.995 + tmp_file_1=/tmp/62.ZHU 00:14:30.995 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:14:30.995 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:14:30.995 + tmp_file_2=/tmp/spdk_tgt_config.json.v2A 00:14:30.995 + ret=0 00:14:30.995 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:14:31.560 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:14:31.560 + diff -u /tmp/62.ZHU /tmp/spdk_tgt_config.json.v2A 00:14:31.560 + ret=1 00:14:31.560 + echo '=== Start of file: /tmp/62.ZHU ===' 00:14:31.560 + cat /tmp/62.ZHU 00:14:31.560 + echo '=== End of file: /tmp/62.ZHU ===' 00:14:31.560 + echo '' 00:14:31.560 + echo '=== Start of file: /tmp/spdk_tgt_config.json.v2A ===' 00:14:31.560 + cat /tmp/spdk_tgt_config.json.v2A 00:14:31.560 + echo '=== End of file: /tmp/spdk_tgt_config.json.v2A ===' 00:14:31.560 + echo '' 00:14:31.560 + rm /tmp/62.ZHU /tmp/spdk_tgt_config.json.v2A 00:14:31.560 + exit 1 00:14:31.560 08:40:26 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:14:31.560 INFO: configuration change detected. 
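The change just detected is deliberately minimal: the test deletes the sentinel bdev it created for exactly this purpose, so the live config diverges from spdk_tgt_config.json and the same sort-and-diff pipeline now exits 1 (a sketch under the same assumptions as the previous one):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock \
    bdev_malloc_delete MallocBdevForConfigChangeCheck
# re-running the save_config / sort / diff pipeline now reports the missing
# bdev in the unified diff and returns nonzero, which is the ret=1 seen above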
00:14:31.560 08:40:26 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:14:31.560 08:40:26 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:14:31.560 08:40:26 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:31.560 08:40:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:31.560 08:40:26 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:14:31.560 08:40:26 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:14:31.560 08:40:26 json_config -- json_config/json_config.sh@317 -- # [[ -n 2140172 ]] 00:14:31.560 08:40:26 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:14:31.560 08:40:26 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:14:31.560 08:40:26 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:31.560 08:40:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:31.560 08:40:26 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:14:31.560 08:40:26 json_config -- json_config/json_config.sh@193 -- # uname -s 00:14:31.560 08:40:26 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:14:31.560 08:40:26 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:14:31.561 08:40:26 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:14:31.561 08:40:26 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:14:31.561 08:40:26 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:31.561 08:40:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:31.561 08:40:26 json_config -- json_config/json_config.sh@323 -- # killprocess 2140172 00:14:31.561 08:40:26 json_config -- common/autotest_common.sh@947 -- # '[' -z 2140172 ']' 00:14:31.561 08:40:26 json_config -- common/autotest_common.sh@951 -- # kill -0 2140172 00:14:31.561 08:40:26 json_config -- common/autotest_common.sh@952 -- # uname 00:14:31.561 08:40:26 json_config -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:31.561 08:40:26 json_config -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2140172 00:14:31.561 08:40:26 json_config -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:31.561 08:40:26 json_config -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:31.561 08:40:26 json_config -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2140172' 00:14:31.561 killing process with pid 2140172 00:14:31.561 08:40:26 json_config -- common/autotest_common.sh@966 -- # kill 2140172 00:14:31.561 [2024-05-15 08:40:26.177872] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:31.561 08:40:26 json_config -- common/autotest_common.sh@971 -- # wait 2140172 00:14:33.459 08:40:27 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:14:33.459 08:40:27 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:14:33.459 08:40:27 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:33.459 08:40:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:33.459 08:40:27 
json_config -- json_config/json_config.sh@328 -- # return 0 00:14:33.459 08:40:27 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:14:33.459 INFO: Success 00:14:33.459 00:14:33.459 real 0m16.662s 00:14:33.459 user 0m18.554s 00:14:33.459 sys 0m2.086s 00:14:33.459 08:40:27 json_config -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:33.459 08:40:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:33.459 ************************************ 00:14:33.459 END TEST json_config 00:14:33.459 ************************************ 00:14:33.459 08:40:27 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:14:33.459 08:40:27 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:33.459 08:40:27 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:33.459 08:40:27 -- common/autotest_common.sh@10 -- # set +x 00:14:33.459 ************************************ 00:14:33.459 START TEST json_config_extra_key 00:14:33.459 ************************************ 00:14:33.459 08:40:27 json_config_extra_key -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:14:33.459 08:40:27 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.459 08:40:27 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:14:33.459 08:40:27 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.459 08:40:27 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.459 08:40:27 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.459 08:40:27 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.459 08:40:27 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.459 08:40:27 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.459 08:40:27 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.459 08:40:27 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.459 08:40:27 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.459 08:40:27 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.459 08:40:27 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:33.459 08:40:27 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:33.459 08:40:27 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.459 08:40:27 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.459 08:40:27 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:33.459 08:40:27 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.459 08:40:27 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.459 08:40:27 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.459 08:40:27 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.459 
08:40:27 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.460 08:40:27 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.460 08:40:27 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.460 08:40:27 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.460 08:40:27 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:14:33.460 08:40:27 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.460 08:40:27 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:14:33.460 08:40:27 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:33.460 08:40:27 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:33.460 08:40:27 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.460 08:40:27 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.460 08:40:27 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.460 08:40:27 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:33.460 08:40:27 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:33.460 08:40:27 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:33.460 08:40:27 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:14:33.460 08:40:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:14:33.460 08:40:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:14:33.460 08:40:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:14:33.460 08:40:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:14:33.460 08:40:27 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:14:33.460 08:40:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:14:33.460 08:40:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:14:33.460 08:40:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:14:33.460 08:40:27 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:14:33.460 08:40:27 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:14:33.460 INFO: launching applications... 00:14:33.460 08:40:27 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:14:33.460 08:40:27 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:14:33.460 08:40:27 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:14:33.460 08:40:27 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:14:33.460 08:40:27 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:14:33.460 08:40:27 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:14:33.460 08:40:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:33.460 08:40:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:33.460 08:40:27 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2141093 00:14:33.460 08:40:27 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:14:33.460 08:40:27 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:14:33.460 Waiting for target to run... 00:14:33.460 08:40:27 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2141093 /var/tmp/spdk_tgt.sock 00:14:33.460 08:40:27 json_config_extra_key -- common/autotest_common.sh@828 -- # '[' -z 2141093 ']' 00:14:33.460 08:40:27 json_config_extra_key -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:14:33.460 08:40:27 json_config_extra_key -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:33.460 08:40:27 json_config_extra_key -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:14:33.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:14:33.460 08:40:27 json_config_extra_key -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:33.460 08:40:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:14:33.460 [2024-05-15 08:40:27.942761] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
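This variant exercises the same --json startup path, but with extra_key.json; judging from the test's name, that file carries keys beyond what the config loader strictly needs, and as the trace below shows, the pass criterion is simply that the target comes up and later shuts down cleanly on SIGINT. The launch command as used in this run:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json "$SPDK/test/json_config/extra_key.json"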
00:14:33.460 [2024-05-15 08:40:27.942860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2141093 ] 00:14:33.460 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.718 [2024-05-15 08:40:28.438703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.976 [2024-05-15 08:40:28.521453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.262 08:40:28 json_config_extra_key -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:34.262 08:40:28 json_config_extra_key -- common/autotest_common.sh@861 -- # return 0 00:14:34.262 08:40:28 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:14:34.262 00:14:34.262 08:40:28 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:14:34.262 INFO: shutting down applications... 00:14:34.262 08:40:28 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:14:34.262 08:40:28 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:14:34.262 08:40:28 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:14:34.262 08:40:28 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2141093 ]] 00:14:34.262 08:40:28 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2141093 00:14:34.262 08:40:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:14:34.262 08:40:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:34.262 08:40:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2141093 00:14:34.262 08:40:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:14:34.828 08:40:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:14:34.829 08:40:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:34.829 08:40:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2141093 00:14:34.829 08:40:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:14:34.829 08:40:29 json_config_extra_key -- json_config/common.sh@43 -- # break 00:14:34.829 08:40:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:14:34.829 08:40:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:14:34.829 SPDK target shutdown done 00:14:34.829 08:40:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:14:34.829 Success 00:14:34.829 00:14:34.829 real 0m1.537s 00:14:34.829 user 0m1.362s 00:14:34.829 sys 0m0.585s 00:14:34.829 08:40:29 json_config_extra_key -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:34.829 08:40:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:14:34.829 ************************************ 00:14:34.829 END TEST json_config_extra_key 00:14:34.829 ************************************ 00:14:34.829 08:40:29 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:14:34.829 08:40:29 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:34.829 08:40:29 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:34.829 08:40:29 -- common/autotest_common.sh@10 -- # set +x 00:14:34.829 ************************************ 
00:14:34.829 START TEST alias_rpc 00:14:34.829 ************************************ 00:14:34.829 08:40:29 alias_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:14:34.829 * Looking for test storage... 00:14:34.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:14:34.829 08:40:29 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:14:34.829 08:40:29 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2141392 00:14:34.829 08:40:29 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:14:34.829 08:40:29 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2141392 00:14:34.829 08:40:29 alias_rpc -- common/autotest_common.sh@828 -- # '[' -z 2141392 ']' 00:14:34.829 08:40:29 alias_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.829 08:40:29 alias_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:34.829 08:40:29 alias_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.829 08:40:29 alias_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:34.829 08:40:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.829 [2024-05-15 08:40:29.530257] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:14:34.829 [2024-05-15 08:40:29.530348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2141392 ] 00:14:34.829 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.829 [2024-05-15 08:40:29.595476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.087 [2024-05-15 08:40:29.676152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.346 08:40:29 alias_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:35.346 08:40:29 alias_rpc -- common/autotest_common.sh@861 -- # return 0 00:14:35.346 08:40:29 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:14:35.603 08:40:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2141392 00:14:35.603 08:40:30 alias_rpc -- common/autotest_common.sh@947 -- # '[' -z 2141392 ']' 00:14:35.603 08:40:30 alias_rpc -- common/autotest_common.sh@951 -- # kill -0 2141392 00:14:35.603 08:40:30 alias_rpc -- common/autotest_common.sh@952 -- # uname 00:14:35.603 08:40:30 alias_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:35.603 08:40:30 alias_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2141392 00:14:35.603 08:40:30 alias_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:35.603 08:40:30 alias_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:35.603 08:40:30 alias_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2141392' 00:14:35.604 killing process with pid 2141392 00:14:35.604 08:40:30 alias_rpc -- common/autotest_common.sh@966 -- # kill 2141392 00:14:35.604 08:40:30 alias_rpc -- common/autotest_common.sh@971 -- # wait 2141392 
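The killprocess sequence traced above, reconstructed as a simplified helper. The real one in autotest_common.sh also special-cases sudo-wrapped targets, which is why the trace checks ps --no-headers -o comm= against 'sudo'; for an SPDK app that check yields reactor_0, so the plain branch runs.

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1     # the process must still be running
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                    # reap it so its exit status is surfaced
}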
00:14:35.861 00:14:35.861 real 0m1.211s 00:14:35.861 user 0m1.242s 00:14:35.861 sys 0m0.458s 00:14:35.861 08:40:30 alias_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:35.861 08:40:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.861 ************************************ 00:14:35.861 END TEST alias_rpc 00:14:35.861 ************************************ 00:14:36.120 08:40:30 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:14:36.120 08:40:30 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:14:36.120 08:40:30 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:36.120 08:40:30 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:36.120 08:40:30 -- common/autotest_common.sh@10 -- # set +x 00:14:36.120 ************************************ 00:14:36.120 START TEST spdkcli_tcp 00:14:36.120 ************************************ 00:14:36.120 08:40:30 spdkcli_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:14:36.120 * Looking for test storage... 00:14:36.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:14:36.120 08:40:30 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:14:36.120 08:40:30 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:14:36.120 08:40:30 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:14:36.120 08:40:30 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:14:36.120 08:40:30 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:14:36.120 08:40:30 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:36.120 08:40:30 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:14:36.120 08:40:30 spdkcli_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:36.120 08:40:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:36.120 08:40:30 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2141590 00:14:36.120 08:40:30 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:14:36.120 08:40:30 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2141590 00:14:36.120 08:40:30 spdkcli_tcp -- common/autotest_common.sh@828 -- # '[' -z 2141590 ']' 00:14:36.120 08:40:30 spdkcli_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.120 08:40:30 spdkcli_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:36.120 08:40:30 spdkcli_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.120 08:40:30 spdkcli_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:36.120 08:40:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:36.120 [2024-05-15 08:40:30.799920] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:14:36.120 [2024-05-15 08:40:30.800011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2141590 ] 00:14:36.120 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.120 [2024-05-15 08:40:30.866347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:36.378 [2024-05-15 08:40:30.950436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.378 [2024-05-15 08:40:30.950440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.637 08:40:31 spdkcli_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:36.637 08:40:31 spdkcli_tcp -- common/autotest_common.sh@861 -- # return 0 00:14:36.637 08:40:31 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2141600 00:14:36.637 08:40:31 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:14:36.637 08:40:31 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:14:36.895 [ 00:14:36.895 "bdev_malloc_delete", 00:14:36.895 "bdev_malloc_create", 00:14:36.895 "bdev_null_resize", 00:14:36.895 "bdev_null_delete", 00:14:36.895 "bdev_null_create", 00:14:36.895 "bdev_nvme_cuse_unregister", 00:14:36.895 "bdev_nvme_cuse_register", 00:14:36.895 "bdev_opal_new_user", 00:14:36.895 "bdev_opal_set_lock_state", 00:14:36.895 "bdev_opal_delete", 00:14:36.895 "bdev_opal_get_info", 00:14:36.895 "bdev_opal_create", 00:14:36.895 "bdev_nvme_opal_revert", 00:14:36.895 "bdev_nvme_opal_init", 00:14:36.895 "bdev_nvme_send_cmd", 00:14:36.895 "bdev_nvme_get_path_iostat", 00:14:36.895 "bdev_nvme_get_mdns_discovery_info", 00:14:36.895 "bdev_nvme_stop_mdns_discovery", 00:14:36.895 "bdev_nvme_start_mdns_discovery", 00:14:36.895 "bdev_nvme_set_multipath_policy", 00:14:36.895 "bdev_nvme_set_preferred_path", 00:14:36.895 "bdev_nvme_get_io_paths", 00:14:36.895 "bdev_nvme_remove_error_injection", 00:14:36.895 "bdev_nvme_add_error_injection", 00:14:36.895 "bdev_nvme_get_discovery_info", 00:14:36.895 "bdev_nvme_stop_discovery", 00:14:36.895 "bdev_nvme_start_discovery", 00:14:36.895 "bdev_nvme_get_controller_health_info", 00:14:36.895 "bdev_nvme_disable_controller", 00:14:36.895 "bdev_nvme_enable_controller", 00:14:36.895 "bdev_nvme_reset_controller", 00:14:36.895 "bdev_nvme_get_transport_statistics", 00:14:36.895 "bdev_nvme_apply_firmware", 00:14:36.895 "bdev_nvme_detach_controller", 00:14:36.895 "bdev_nvme_get_controllers", 00:14:36.895 "bdev_nvme_attach_controller", 00:14:36.895 "bdev_nvme_set_hotplug", 00:14:36.895 "bdev_nvme_set_options", 00:14:36.895 "bdev_passthru_delete", 00:14:36.895 "bdev_passthru_create", 00:14:36.895 "bdev_lvol_check_shallow_copy", 00:14:36.895 "bdev_lvol_start_shallow_copy", 00:14:36.895 "bdev_lvol_grow_lvstore", 00:14:36.895 "bdev_lvol_get_lvols", 00:14:36.895 "bdev_lvol_get_lvstores", 00:14:36.896 "bdev_lvol_delete", 00:14:36.896 "bdev_lvol_set_read_only", 00:14:36.896 "bdev_lvol_resize", 00:14:36.896 "bdev_lvol_decouple_parent", 00:14:36.896 "bdev_lvol_inflate", 00:14:36.896 "bdev_lvol_rename", 00:14:36.896 "bdev_lvol_clone_bdev", 00:14:36.896 "bdev_lvol_clone", 00:14:36.896 "bdev_lvol_snapshot", 00:14:36.896 "bdev_lvol_create", 00:14:36.896 "bdev_lvol_delete_lvstore", 00:14:36.896 "bdev_lvol_rename_lvstore", 00:14:36.896 "bdev_lvol_create_lvstore", 00:14:36.896 "bdev_raid_set_options", 
00:14:36.896 "bdev_raid_remove_base_bdev", 00:14:36.896 "bdev_raid_add_base_bdev", 00:14:36.896 "bdev_raid_delete", 00:14:36.896 "bdev_raid_create", 00:14:36.896 "bdev_raid_get_bdevs", 00:14:36.896 "bdev_error_inject_error", 00:14:36.896 "bdev_error_delete", 00:14:36.896 "bdev_error_create", 00:14:36.896 "bdev_split_delete", 00:14:36.896 "bdev_split_create", 00:14:36.896 "bdev_delay_delete", 00:14:36.896 "bdev_delay_create", 00:14:36.896 "bdev_delay_update_latency", 00:14:36.896 "bdev_zone_block_delete", 00:14:36.896 "bdev_zone_block_create", 00:14:36.896 "blobfs_create", 00:14:36.896 "blobfs_detect", 00:14:36.896 "blobfs_set_cache_size", 00:14:36.896 "bdev_aio_delete", 00:14:36.896 "bdev_aio_rescan", 00:14:36.896 "bdev_aio_create", 00:14:36.896 "bdev_ftl_set_property", 00:14:36.896 "bdev_ftl_get_properties", 00:14:36.896 "bdev_ftl_get_stats", 00:14:36.896 "bdev_ftl_unmap", 00:14:36.896 "bdev_ftl_unload", 00:14:36.896 "bdev_ftl_delete", 00:14:36.896 "bdev_ftl_load", 00:14:36.896 "bdev_ftl_create", 00:14:36.896 "bdev_virtio_attach_controller", 00:14:36.896 "bdev_virtio_scsi_get_devices", 00:14:36.896 "bdev_virtio_detach_controller", 00:14:36.896 "bdev_virtio_blk_set_hotplug", 00:14:36.896 "bdev_iscsi_delete", 00:14:36.896 "bdev_iscsi_create", 00:14:36.896 "bdev_iscsi_set_options", 00:14:36.896 "accel_error_inject_error", 00:14:36.896 "ioat_scan_accel_module", 00:14:36.896 "dsa_scan_accel_module", 00:14:36.896 "iaa_scan_accel_module", 00:14:36.896 "vfu_virtio_create_scsi_endpoint", 00:14:36.896 "vfu_virtio_scsi_remove_target", 00:14:36.896 "vfu_virtio_scsi_add_target", 00:14:36.896 "vfu_virtio_create_blk_endpoint", 00:14:36.896 "vfu_virtio_delete_endpoint", 00:14:36.896 "keyring_file_remove_key", 00:14:36.896 "keyring_file_add_key", 00:14:36.896 "iscsi_get_histogram", 00:14:36.896 "iscsi_enable_histogram", 00:14:36.896 "iscsi_set_options", 00:14:36.896 "iscsi_get_auth_groups", 00:14:36.896 "iscsi_auth_group_remove_secret", 00:14:36.896 "iscsi_auth_group_add_secret", 00:14:36.896 "iscsi_delete_auth_group", 00:14:36.896 "iscsi_create_auth_group", 00:14:36.896 "iscsi_set_discovery_auth", 00:14:36.896 "iscsi_get_options", 00:14:36.896 "iscsi_target_node_request_logout", 00:14:36.896 "iscsi_target_node_set_redirect", 00:14:36.896 "iscsi_target_node_set_auth", 00:14:36.896 "iscsi_target_node_add_lun", 00:14:36.896 "iscsi_get_stats", 00:14:36.896 "iscsi_get_connections", 00:14:36.896 "iscsi_portal_group_set_auth", 00:14:36.896 "iscsi_start_portal_group", 00:14:36.896 "iscsi_delete_portal_group", 00:14:36.896 "iscsi_create_portal_group", 00:14:36.896 "iscsi_get_portal_groups", 00:14:36.896 "iscsi_delete_target_node", 00:14:36.896 "iscsi_target_node_remove_pg_ig_maps", 00:14:36.896 "iscsi_target_node_add_pg_ig_maps", 00:14:36.896 "iscsi_create_target_node", 00:14:36.896 "iscsi_get_target_nodes", 00:14:36.896 "iscsi_delete_initiator_group", 00:14:36.896 "iscsi_initiator_group_remove_initiators", 00:14:36.896 "iscsi_initiator_group_add_initiators", 00:14:36.896 "iscsi_create_initiator_group", 00:14:36.896 "iscsi_get_initiator_groups", 00:14:36.896 "nvmf_set_crdt", 00:14:36.896 "nvmf_set_config", 00:14:36.896 "nvmf_set_max_subsystems", 00:14:36.896 "nvmf_stop_mdns_prr", 00:14:36.896 "nvmf_publish_mdns_prr", 00:14:36.896 "nvmf_subsystem_get_listeners", 00:14:36.896 "nvmf_subsystem_get_qpairs", 00:14:36.896 "nvmf_subsystem_get_controllers", 00:14:36.896 "nvmf_get_stats", 00:14:36.896 "nvmf_get_transports", 00:14:36.896 "nvmf_create_transport", 00:14:36.896 "nvmf_get_targets", 00:14:36.896 
"nvmf_delete_target", 00:14:36.896 "nvmf_create_target", 00:14:36.896 "nvmf_subsystem_allow_any_host", 00:14:36.896 "nvmf_subsystem_remove_host", 00:14:36.896 "nvmf_subsystem_add_host", 00:14:36.896 "nvmf_ns_remove_host", 00:14:36.896 "nvmf_ns_add_host", 00:14:36.896 "nvmf_subsystem_remove_ns", 00:14:36.896 "nvmf_subsystem_add_ns", 00:14:36.896 "nvmf_subsystem_listener_set_ana_state", 00:14:36.896 "nvmf_discovery_get_referrals", 00:14:36.896 "nvmf_discovery_remove_referral", 00:14:36.896 "nvmf_discovery_add_referral", 00:14:36.896 "nvmf_subsystem_remove_listener", 00:14:36.896 "nvmf_subsystem_add_listener", 00:14:36.896 "nvmf_delete_subsystem", 00:14:36.896 "nvmf_create_subsystem", 00:14:36.896 "nvmf_get_subsystems", 00:14:36.896 "env_dpdk_get_mem_stats", 00:14:36.896 "nbd_get_disks", 00:14:36.896 "nbd_stop_disk", 00:14:36.896 "nbd_start_disk", 00:14:36.896 "ublk_recover_disk", 00:14:36.896 "ublk_get_disks", 00:14:36.896 "ublk_stop_disk", 00:14:36.896 "ublk_start_disk", 00:14:36.896 "ublk_destroy_target", 00:14:36.896 "ublk_create_target", 00:14:36.896 "virtio_blk_create_transport", 00:14:36.896 "virtio_blk_get_transports", 00:14:36.896 "vhost_controller_set_coalescing", 00:14:36.896 "vhost_get_controllers", 00:14:36.896 "vhost_delete_controller", 00:14:36.896 "vhost_create_blk_controller", 00:14:36.896 "vhost_scsi_controller_remove_target", 00:14:36.896 "vhost_scsi_controller_add_target", 00:14:36.896 "vhost_start_scsi_controller", 00:14:36.896 "vhost_create_scsi_controller", 00:14:36.896 "thread_set_cpumask", 00:14:36.896 "framework_get_scheduler", 00:14:36.896 "framework_set_scheduler", 00:14:36.896 "framework_get_reactors", 00:14:36.896 "thread_get_io_channels", 00:14:36.896 "thread_get_pollers", 00:14:36.896 "thread_get_stats", 00:14:36.896 "framework_monitor_context_switch", 00:14:36.896 "spdk_kill_instance", 00:14:36.896 "log_enable_timestamps", 00:14:36.896 "log_get_flags", 00:14:36.896 "log_clear_flag", 00:14:36.896 "log_set_flag", 00:14:36.896 "log_get_level", 00:14:36.896 "log_set_level", 00:14:36.896 "log_get_print_level", 00:14:36.896 "log_set_print_level", 00:14:36.896 "framework_enable_cpumask_locks", 00:14:36.896 "framework_disable_cpumask_locks", 00:14:36.896 "framework_wait_init", 00:14:36.896 "framework_start_init", 00:14:36.896 "scsi_get_devices", 00:14:36.896 "bdev_get_histogram", 00:14:36.896 "bdev_enable_histogram", 00:14:36.896 "bdev_set_qos_limit", 00:14:36.896 "bdev_set_qd_sampling_period", 00:14:36.896 "bdev_get_bdevs", 00:14:36.896 "bdev_reset_iostat", 00:14:36.896 "bdev_get_iostat", 00:14:36.896 "bdev_examine", 00:14:36.896 "bdev_wait_for_examine", 00:14:36.896 "bdev_set_options", 00:14:36.896 "notify_get_notifications", 00:14:36.896 "notify_get_types", 00:14:36.896 "accel_get_stats", 00:14:36.896 "accel_set_options", 00:14:36.896 "accel_set_driver", 00:14:36.896 "accel_crypto_key_destroy", 00:14:36.896 "accel_crypto_keys_get", 00:14:36.896 "accel_crypto_key_create", 00:14:36.896 "accel_assign_opc", 00:14:36.896 "accel_get_module_info", 00:14:36.896 "accel_get_opc_assignments", 00:14:36.896 "vmd_rescan", 00:14:36.896 "vmd_remove_device", 00:14:36.896 "vmd_enable", 00:14:36.896 "sock_get_default_impl", 00:14:36.896 "sock_set_default_impl", 00:14:36.896 "sock_impl_set_options", 00:14:36.896 "sock_impl_get_options", 00:14:36.896 "iobuf_get_stats", 00:14:36.896 "iobuf_set_options", 00:14:36.896 "keyring_get_keys", 00:14:36.896 "framework_get_pci_devices", 00:14:36.896 "framework_get_config", 00:14:36.896 "framework_get_subsystems", 00:14:36.896 
"vfu_tgt_set_base_path", 00:14:36.896 "trace_get_info", 00:14:36.896 "trace_get_tpoint_group_mask", 00:14:36.896 "trace_disable_tpoint_group", 00:14:36.896 "trace_enable_tpoint_group", 00:14:36.896 "trace_clear_tpoint_mask", 00:14:36.896 "trace_set_tpoint_mask", 00:14:36.896 "spdk_get_version", 00:14:36.896 "rpc_get_methods" 00:14:36.896 ] 00:14:36.896 08:40:31 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:14:36.896 08:40:31 spdkcli_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:36.896 08:40:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:36.896 08:40:31 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:36.896 08:40:31 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2141590 00:14:36.896 08:40:31 spdkcli_tcp -- common/autotest_common.sh@947 -- # '[' -z 2141590 ']' 00:14:36.896 08:40:31 spdkcli_tcp -- common/autotest_common.sh@951 -- # kill -0 2141590 00:14:36.896 08:40:31 spdkcli_tcp -- common/autotest_common.sh@952 -- # uname 00:14:36.896 08:40:31 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:36.896 08:40:31 spdkcli_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2141590 00:14:36.896 08:40:31 spdkcli_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:36.896 08:40:31 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:36.896 08:40:31 spdkcli_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2141590' 00:14:36.896 killing process with pid 2141590 00:14:36.896 08:40:31 spdkcli_tcp -- common/autotest_common.sh@966 -- # kill 2141590 00:14:36.896 08:40:31 spdkcli_tcp -- common/autotest_common.sh@971 -- # wait 2141590 00:14:37.156 00:14:37.156 real 0m1.213s 00:14:37.156 user 0m2.119s 00:14:37.156 sys 0m0.472s 00:14:37.156 08:40:31 spdkcli_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:37.156 08:40:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:37.156 ************************************ 00:14:37.156 END TEST spdkcli_tcp 00:14:37.156 ************************************ 00:14:37.156 08:40:31 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:14:37.156 08:40:31 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:37.156 08:40:31 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:37.156 08:40:31 -- common/autotest_common.sh@10 -- # set +x 00:14:37.414 ************************************ 00:14:37.414 START TEST dpdk_mem_utility 00:14:37.414 ************************************ 00:14:37.414 08:40:31 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:14:37.414 * Looking for test storage... 
00:14:37.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:14:37.414 08:40:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:14:37.414 08:40:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2141800 00:14:37.414 08:40:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:14:37.414 08:40:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2141800 00:14:37.414 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@828 -- # '[' -z 2141800 ']' 00:14:37.414 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.414 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:37.414 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.414 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:37.414 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:14:37.414 [2024-05-15 08:40:32.063263] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:14:37.414 [2024-05-15 08:40:32.063348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2141800 ] 00:14:37.414 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.414 [2024-05-15 08:40:32.128928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.673 [2024-05-15 08:40:32.210028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.673 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:37.673 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@861 -- # return 0 00:14:37.673 08:40:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:14:37.673 08:40:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:14:37.932 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:37.932 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:14:37.932 { 00:14:37.932 "filename": "/tmp/spdk_mem_dump.txt" 00:14:37.932 } 00:14:37.932 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:37.932 08:40:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:14:37.932 DPDK memory size 814.000000 MiB in 1 heap(s) 00:14:37.932 1 heaps totaling size 814.000000 MiB 00:14:37.932 size: 814.000000 MiB heap id: 0 00:14:37.932 end heaps---------- 00:14:37.932 8 mempools totaling size 598.116089 MiB 00:14:37.932 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:14:37.932 size: 158.602051 MiB name: PDU_data_out_Pool 00:14:37.932 size: 84.521057 MiB name: bdev_io_2141800 00:14:37.932 size: 51.011292 MiB name: evtpool_2141800 00:14:37.932 size: 50.003479 MiB name: 
msgpool_2141800 00:14:37.932 size: 21.763794 MiB name: PDU_Pool 00:14:37.932 size: 19.513306 MiB name: SCSI_TASK_Pool 00:14:37.932 size: 0.026123 MiB name: Session_Pool 00:14:37.932 end mempools------- 00:14:37.932 6 memzones totaling size 4.142822 MiB 00:14:37.932 size: 1.000366 MiB name: RG_ring_0_2141800 00:14:37.932 size: 1.000366 MiB name: RG_ring_1_2141800 00:14:37.932 size: 1.000366 MiB name: RG_ring_4_2141800 00:14:37.932 size: 1.000366 MiB name: RG_ring_5_2141800 00:14:37.932 size: 0.125366 MiB name: RG_ring_2_2141800 00:14:37.932 size: 0.015991 MiB name: RG_ring_3_2141800 00:14:37.932 end memzones------- 00:14:37.932 08:40:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:14:37.932 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:14:37.932 list of free elements. size: 12.519348 MiB 00:14:37.932 element at address: 0x200000400000 with size: 1.999512 MiB 00:14:37.932 element at address: 0x200018e00000 with size: 0.999878 MiB 00:14:37.932 element at address: 0x200019000000 with size: 0.999878 MiB 00:14:37.932 element at address: 0x200003e00000 with size: 0.996277 MiB 00:14:37.932 element at address: 0x200031c00000 with size: 0.994446 MiB 00:14:37.932 element at address: 0x200013800000 with size: 0.978699 MiB 00:14:37.932 element at address: 0x200007000000 with size: 0.959839 MiB 00:14:37.932 element at address: 0x200019200000 with size: 0.936584 MiB 00:14:37.932 element at address: 0x200000200000 with size: 0.841614 MiB 00:14:37.932 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:14:37.932 element at address: 0x20000b200000 with size: 0.490723 MiB 00:14:37.932 element at address: 0x200000800000 with size: 0.487793 MiB 00:14:37.932 element at address: 0x200019400000 with size: 0.485657 MiB 00:14:37.932 element at address: 0x200027e00000 with size: 0.410034 MiB 00:14:37.932 element at address: 0x200003a00000 with size: 0.355530 MiB 00:14:37.932 list of standard malloc elements. 
size: 199.218079 MiB 00:14:37.932 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:14:37.932 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:14:37.932 element at address: 0x200018efff80 with size: 1.000122 MiB 00:14:37.932 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:14:37.932 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:14:37.932 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:14:37.932 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:14:37.932 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:14:37.932 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:14:37.932 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:14:37.932 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:14:37.932 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:14:37.932 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:14:37.932 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:14:37.932 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:14:37.932 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:14:37.932 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:14:37.932 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:14:37.932 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:14:37.932 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:14:37.932 element at address: 0x200003adb300 with size: 0.000183 MiB 00:14:37.932 element at address: 0x200003adb500 with size: 0.000183 MiB 00:14:37.932 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:14:37.932 element at address: 0x200003affa80 with size: 0.000183 MiB 00:14:37.932 element at address: 0x200003affb40 with size: 0.000183 MiB 00:14:37.932 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:14:37.932 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:14:37.932 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:14:37.932 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:14:37.932 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:14:37.932 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:14:37.932 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:14:37.932 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:14:37.932 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:14:37.932 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:14:37.932 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:14:37.932 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:14:37.932 element at address: 0x200027e69040 with size: 0.000183 MiB 00:14:37.932 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:14:37.932 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:14:37.932 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:14:37.932 list of memzone associated elements. 
size: 602.262573 MiB 00:14:37.932 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:14:37.932 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:14:37.932 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:14:37.932 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:14:37.932 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:14:37.932 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2141800_0 00:14:37.932 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:14:37.932 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2141800_0 00:14:37.932 element at address: 0x200003fff380 with size: 48.003052 MiB 00:14:37.932 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2141800_0 00:14:37.932 element at address: 0x2000195be940 with size: 20.255554 MiB 00:14:37.932 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:14:37.932 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:14:37.932 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:14:37.932 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:14:37.932 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2141800 00:14:37.932 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:14:37.932 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2141800 00:14:37.932 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:14:37.932 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2141800 00:14:37.932 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:14:37.932 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:14:37.932 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:14:37.932 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:14:37.932 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:14:37.932 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:14:37.932 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:14:37.932 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:14:37.932 element at address: 0x200003eff180 with size: 1.000488 MiB 00:14:37.932 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2141800 00:14:37.932 element at address: 0x200003affc00 with size: 1.000488 MiB 00:14:37.932 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2141800 00:14:37.932 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:14:37.932 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2141800 00:14:37.932 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:14:37.932 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2141800 00:14:37.932 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:14:37.932 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2141800 00:14:37.932 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:14:37.932 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:14:37.932 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:14:37.932 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:14:37.932 element at address: 0x20001947c540 with size: 0.250488 MiB 00:14:37.932 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:14:37.932 element at address: 0x200003adf880 with size: 0.125488 MiB 00:14:37.932 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2141800 00:14:37.932 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:14:37.932 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:14:37.933 element at address: 0x200027e69100 with size: 0.023743 MiB 00:14:37.933 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:14:37.933 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:14:37.933 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2141800 00:14:37.933 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:14:37.933 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:14:37.933 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:14:37.933 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2141800 00:14:37.933 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:14:37.933 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2141800 00:14:37.933 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:14:37.933 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:14:37.933 08:40:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:14:37.933 08:40:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2141800 00:14:37.933 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@947 -- # '[' -z 2141800 ']' 00:14:37.933 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@951 -- # kill -0 2141800 00:14:37.933 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@952 -- # uname 00:14:37.933 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:37.933 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2141800 00:14:37.933 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:37.933 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:37.933 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2141800' 00:14:37.933 killing process with pid 2141800 00:14:37.933 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@966 -- # kill 2141800 00:14:37.933 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@971 -- # wait 2141800 00:14:38.499 00:14:38.499 real 0m1.040s 00:14:38.499 user 0m0.993s 00:14:38.499 sys 0m0.414s 00:14:38.499 08:40:33 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:38.499 08:40:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:14:38.499 ************************************ 00:14:38.499 END TEST dpdk_mem_utility 00:14:38.499 ************************************ 00:14:38.499 08:40:33 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:14:38.499 08:40:33 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:38.499 08:40:33 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:38.499 08:40:33 -- common/autotest_common.sh@10 -- # set +x 00:14:38.499 ************************************ 00:14:38.499 START TEST event 00:14:38.499 ************************************ 00:14:38.499 08:40:33 event -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:14:38.499 * Looking for test storage... 
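The dpdk_mem_utility test above first asks the target to dump its DPDK memory state (env_dpdk_get_mem_stats writes /tmp/spdk_mem_dump.txt, as the RPC reply shows), then post-processes the dump with scripts/dpdk_mem_info.py: once for the heap/mempool/memzone summary and once with -m 0 for the element-level view of heap 0. The same flow by hand against a running target:

  # dump the current DPDK memory layout to /tmp/spdk_mem_dump.txt
  scripts/rpc.py env_dpdk_get_mem_stats
  # summarize heaps, mempools and memzones from the dump
  scripts/dpdk_mem_info.py
  # list free/busy elements and associated memzones for heap id 0
  scripts/dpdk_mem_info.py -m 0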
00:14:38.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:14:38.499 08:40:33 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:14:38.499 08:40:33 event -- bdev/nbd_common.sh@6 -- # set -e 00:14:38.499 08:40:33 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:14:38.499 08:40:33 event -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:14:38.499 08:40:33 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:38.499 08:40:33 event -- common/autotest_common.sh@10 -- # set +x 00:14:38.499 ************************************ 00:14:38.499 START TEST event_perf 00:14:38.499 ************************************ 00:14:38.499 08:40:33 event.event_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:14:38.499 Running I/O for 1 seconds...[2024-05-15 08:40:33.154438] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:14:38.499 [2024-05-15 08:40:33.154495] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2141988 ] 00:14:38.499 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.499 [2024-05-15 08:40:33.223395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.756 [2024-05-15 08:40:33.314386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.756 [2024-05-15 08:40:33.314441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.756 [2024-05-15 08:40:33.314558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.756 [2024-05-15 08:40:33.314561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.690 Running I/O for 1 seconds... 00:14:39.690 lcore 0: 231318 00:14:39.690 lcore 1: 231316 00:14:39.690 lcore 2: 231317 00:14:39.690 lcore 3: 231318 00:14:39.690 done. 00:14:39.690 00:14:39.690 real 0m1.254s 00:14:39.690 user 0m4.166s 00:14:39.690 sys 0m0.085s 00:14:39.690 08:40:34 event.event_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:39.690 08:40:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:14:39.690 ************************************ 00:14:39.690 END TEST event_perf 00:14:39.690 ************************************ 00:14:39.690 08:40:34 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:14:39.690 08:40:34 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:14:39.690 08:40:34 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:39.690 08:40:34 event -- common/autotest_common.sh@10 -- # set +x 00:14:39.690 ************************************ 00:14:39.691 START TEST event_reactor 00:14:39.691 ************************************ 00:14:39.691 08:40:34 event.event_reactor -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:14:39.691 [2024-05-15 08:40:34.455042] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
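event_perf above, like the reactor and reactor_perf tests that follow, is a standalone binary taking a core mask (-m) and a runtime in seconds (-t): event_perf prints a per-lcore count of processed events, reactor exercises oneshot and periodic timers (the tick 100/250/500 lines), and reactor_perf reports events per second. Run directly from a built tree (a sketch; paths relative to the SPDK root):

  # one second of event processing across four cores (mask 0xF)
  test/event/event_perf/event_perf -m 0xF -t 1
  # one second each of the reactor timer test and the reactor perf test
  test/event/reactor/reactor -t 1
  test/event/reactor_perf/reactor_perf -t 1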
00:14:39.691 [2024-05-15 08:40:34.455090] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2142143 ] 00:14:39.949 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.949 [2024-05-15 08:40:34.525735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.949 [2024-05-15 08:40:34.615448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.321 test_start 00:14:41.321 oneshot 00:14:41.321 tick 100 00:14:41.321 tick 100 00:14:41.321 tick 250 00:14:41.321 tick 100 00:14:41.321 tick 100 00:14:41.321 tick 100 00:14:41.321 tick 250 00:14:41.321 tick 500 00:14:41.321 tick 100 00:14:41.321 tick 100 00:14:41.321 tick 250 00:14:41.321 tick 100 00:14:41.321 tick 100 00:14:41.321 test_end 00:14:41.321 00:14:41.321 real 0m1.250s 00:14:41.321 user 0m1.159s 00:14:41.321 sys 0m0.087s 00:14:41.321 08:40:35 event.event_reactor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:41.321 08:40:35 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:14:41.321 ************************************ 00:14:41.321 END TEST event_reactor 00:14:41.321 ************************************ 00:14:41.321 08:40:35 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:14:41.321 08:40:35 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:14:41.321 08:40:35 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:41.321 08:40:35 event -- common/autotest_common.sh@10 -- # set +x 00:14:41.321 ************************************ 00:14:41.321 START TEST event_reactor_perf 00:14:41.321 ************************************ 00:14:41.321 08:40:35 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:14:41.321 [2024-05-15 08:40:35.757984] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:14:41.321 [2024-05-15 08:40:35.758053] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2142297 ] 00:14:41.321 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.321 [2024-05-15 08:40:35.828840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.321 [2024-05-15 08:40:35.919834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.254 test_start 00:14:42.254 test_end 00:14:42.254 Performance: 351506 events per second 00:14:42.254 00:14:42.254 real 0m1.256s 00:14:42.254 user 0m1.159s 00:14:42.254 sys 0m0.093s 00:14:42.254 08:40:37 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:42.254 08:40:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:14:42.254 ************************************ 00:14:42.254 END TEST event_reactor_perf 00:14:42.254 ************************************ 00:14:42.254 08:40:37 event -- event/event.sh@49 -- # uname -s 00:14:42.254 08:40:37 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:14:42.254 08:40:37 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:14:42.254 08:40:37 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:42.254 08:40:37 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:42.254 08:40:37 event -- common/autotest_common.sh@10 -- # set +x 00:14:42.512 ************************************ 00:14:42.512 START TEST event_scheduler 00:14:42.512 ************************************ 00:14:42.512 08:40:37 event.event_scheduler -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:14:42.512 * Looking for test storage... 00:14:42.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:14:42.512 08:40:37 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:14:42.512 08:40:37 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2142481 00:14:42.512 08:40:37 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:14:42.512 08:40:37 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:14:42.512 08:40:37 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2142481 00:14:42.512 08:40:37 event.event_scheduler -- common/autotest_common.sh@828 -- # '[' -z 2142481 ']' 00:14:42.512 08:40:37 event.event_scheduler -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.512 08:40:37 event.event_scheduler -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:42.512 08:40:37 event.event_scheduler -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
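The scheduler test app launched here uses --wait-for-rpc, so the framework stays paused until a scheduler is chosen over RPC; the lines that follow show the dynamic scheduler being selected and framework_start_init completing startup, with the POWER messages reflecting per-lcore power-management setup as that happens. A sketch of the handshake, flags copied from the invocation above:

  # -m 0xF: cores 0-3; -p 0x2: core 2 as main lcore; start paused
  test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  # choose the dynamic scheduler, then finish framework initialization
  scripts/rpc.py framework_set_scheduler dynamic
  scripts/rpc.py framework_start_init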
00:14:42.513 08:40:37 event.event_scheduler -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:42.513 08:40:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:14:42.513 [2024-05-15 08:40:37.147100] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:14:42.513 [2024-05-15 08:40:37.147176] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2142481 ] 00:14:42.513 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.513 [2024-05-15 08:40:37.213581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:42.513 [2024-05-15 08:40:37.304280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.513 [2024-05-15 08:40:37.304368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.772 [2024-05-15 08:40:37.306241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:42.772 [2024-05-15 08:40:37.306254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.772 08:40:37 event.event_scheduler -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:42.772 08:40:37 event.event_scheduler -- common/autotest_common.sh@861 -- # return 0 00:14:42.772 08:40:37 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:14:42.772 08:40:37 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:42.772 08:40:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:14:42.772 POWER: Env isn't set yet! 00:14:42.772 POWER: Attempting to initialise ACPI cpufreq power management... 00:14:42.772 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:14:42.772 POWER: Cannot get available frequencies of lcore 0 00:14:42.772 POWER: Attempting to initialise PSTAT power management... 00:14:42.772 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:14:42.772 POWER: Initialized successfully for lcore 0 power management 00:14:42.772 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:14:42.772 POWER: Initialized successfully for lcore 1 power management 00:14:42.772 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:14:42.772 POWER: Initialized successfully for lcore 2 power management 00:14:42.772 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:14:42.772 POWER: Initialized successfully for lcore 3 power management 00:14:42.772 08:40:37 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:42.772 08:40:37 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:14:42.772 08:40:37 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:42.772 08:40:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:14:42.772 [2024-05-15 08:40:37.512586] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
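scheduler_create_thread, which follows, drives the test app through plugin RPCs: it creates four active threads pinned to cores 0-3 (-a 100), four idle pinned threads (-a 0), unpinned one_third_active and half_active threads, adjusts one thread's activity with scheduler_thread_set_active, and finally creates and deletes a thread to exercise teardown. One create/delete round by hand (a sketch; it assumes PYTHONPATH is set so rpc.py can import scheduler_plugin from test/event/scheduler, which is how the harness loads it):

  # pinned to core 0 (-m cpumask), 100% busy (-a); the RPC returns the thread id
  id=$(scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m 0x1 -a 100)
  # remove the thread again
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$id"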
00:14:42.772 08:40:37 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:42.772 08:40:37 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:14:42.772 08:40:37 event.event_scheduler -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:42.772 08:40:37 event.event_scheduler -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:42.772 08:40:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:14:42.772 ************************************ 00:14:42.772 START TEST scheduler_create_thread 00:14:42.772 ************************************ 00:14:42.772 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # scheduler_create_thread 00:14:42.772 08:40:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:14:42.772 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:42.772 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:42.772 2 00:14:42.772 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:42.772 08:40:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:14:42.772 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:42.772 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:43.031 3 00:14:43.031 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:43.031 08:40:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:14:43.031 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:43.031 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:43.031 4 00:14:43.031 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:43.031 08:40:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:14:43.031 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:43.031 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:43.031 5 00:14:43.031 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:43.031 08:40:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:14:43.031 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:43.032 6 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:43.032 7 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:43.032 8 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:43.032 9 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:43.032 10 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:43.032 08:40:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:43.594 08:40:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:43.594 08:40:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:14:43.594 08:40:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:43.594 08:40:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:44.966 08:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:44.966 08:40:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:14:44.966 08:40:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:14:44.966 08:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:44.966 08:40:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:45.898 08:40:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:45.898 00:14:45.898 real 0m3.097s 00:14:45.898 user 0m0.011s 00:14:45.898 sys 0m0.005s 00:14:45.898 08:40:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:45.898 08:40:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:45.898 ************************************ 00:14:45.898 END TEST scheduler_create_thread 00:14:45.898 ************************************ 00:14:45.898 08:40:40 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:14:45.898 08:40:40 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2142481 00:14:45.898 08:40:40 event.event_scheduler -- common/autotest_common.sh@947 -- # '[' -z 2142481 ']' 00:14:45.898 08:40:40 event.event_scheduler -- common/autotest_common.sh@951 -- # kill -0 2142481 00:14:45.898 08:40:40 event.event_scheduler -- common/autotest_common.sh@952 -- # uname 00:14:45.898 08:40:40 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:45.898 08:40:40 event.event_scheduler -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2142481 00:14:45.898 08:40:40 event.event_scheduler -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:14:45.898 08:40:40 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:14:46.156 08:40:40 event.event_scheduler -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2142481' 00:14:46.156 killing process with pid 2142481 00:14:46.156 08:40:40 event.event_scheduler -- common/autotest_common.sh@966 -- # kill 2142481 00:14:46.156 08:40:40 event.event_scheduler -- common/autotest_common.sh@971 -- # wait 2142481 00:14:46.413 [2024-05-15 08:40:41.024986] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
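Once the scheduler run winds down (the POWER lines below restore the original cpufreq governors), app_repeat takes over: it starts an app on a private RPC socket, creates two 64 MiB malloc bdevs with 4 KiB blocks, exports them to the kernel as /dev/nbd0 and /dev/nbd1, and runs dd/cmp write-verify passes over repeated start/stop rounds. The export step it repeats, sketched by hand (assuming the nbd kernel module is loaded and an SPDK app is listening on /var/tmp/spdk-nbd.sock):

  modprobe nbd
  # 64 MiB malloc bdev with 4096-byte blocks; the RPC prints the bdev name (Malloc0)
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
  # attach the bdev to a kernel NBD device; detach when finished
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0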
00:14:46.413 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:14:46.413 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:14:46.413 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:14:46.413 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:14:46.413 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:14:46.413 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:14:46.413 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:14:46.413 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:14:46.671 00:14:46.671 real 0m4.231s 00:14:46.671 user 0m6.952s 00:14:46.671 sys 0m0.335s 00:14:46.671 08:40:41 event.event_scheduler -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:46.671 08:40:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:14:46.671 ************************************ 00:14:46.671 END TEST event_scheduler 00:14:46.671 ************************************ 00:14:46.671 08:40:41 event -- event/event.sh@51 -- # modprobe -n nbd 00:14:46.671 08:40:41 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:14:46.671 08:40:41 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:46.671 08:40:41 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:46.671 08:40:41 event -- common/autotest_common.sh@10 -- # set +x 00:14:46.671 ************************************ 00:14:46.671 START TEST app_repeat 00:14:46.671 ************************************ 00:14:46.671 08:40:41 event.app_repeat -- common/autotest_common.sh@1122 -- # app_repeat_test 00:14:46.671 08:40:41 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:46.671 08:40:41 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:46.671 08:40:41 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:14:46.671 08:40:41 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:46.671 08:40:41 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:14:46.671 08:40:41 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:14:46.671 08:40:41 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:14:46.671 08:40:41 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2143062 00:14:46.671 08:40:41 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:14:46.671 08:40:41 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:14:46.671 08:40:41 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2143062' 00:14:46.671 Process app_repeat pid: 2143062 00:14:46.671 08:40:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:14:46.671 08:40:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:14:46.671 spdk_app_start Round 0 00:14:46.671 08:40:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2143062 /var/tmp/spdk-nbd.sock 00:14:46.671 08:40:41 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 2143062 ']' 00:14:46.671 08:40:41 event.app_repeat -- 
common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:46.671 08:40:41 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:46.671 08:40:41 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:46.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:46.671 08:40:41 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:46.671 08:40:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:46.671 [2024-05-15 08:40:41.370438] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:14:46.671 [2024-05-15 08:40:41.370502] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2143062 ] 00:14:46.671 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.671 [2024-05-15 08:40:41.443862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:46.928 [2024-05-15 08:40:41.533046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.928 [2024-05-15 08:40:41.533051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.928 08:40:41 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:46.928 08:40:41 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:14:46.928 08:40:41 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:47.185 Malloc0 00:14:47.185 08:40:41 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:47.443 Malloc1 00:14:47.443 08:40:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:47.443 08:40:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:47.443 08:40:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:47.443 08:40:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:47.443 08:40:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:47.443 08:40:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:47.443 08:40:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:47.443 08:40:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:47.443 08:40:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:47.443 08:40:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:47.443 08:40:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:47.443 08:40:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:47.443 08:40:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:14:47.443 08:40:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:47.443 08:40:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:47.443 08:40:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:14:47.701 /dev/nbd0 00:14:47.701 08:40:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:47.701 08:40:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:47.701 08:40:42 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:14:47.701 08:40:42 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:14:47.701 08:40:42 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:14:47.701 08:40:42 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:14:47.701 08:40:42 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:14:47.701 08:40:42 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:14:47.701 08:40:42 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:14:47.701 08:40:42 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:14:47.701 08:40:42 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:47.701 1+0 records in 00:14:47.701 1+0 records out 00:14:47.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000137622 s, 29.8 MB/s 00:14:47.701 08:40:42 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:47.701 08:40:42 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:14:47.701 08:40:42 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:47.701 08:40:42 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:14:47.701 08:40:42 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:14:47.701 08:40:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:47.701 08:40:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:47.701 08:40:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:14:47.958 /dev/nbd1 00:14:47.958 08:40:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:47.958 08:40:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:47.958 08:40:42 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:14:47.958 08:40:42 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:14:47.958 08:40:42 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:14:47.958 08:40:42 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:14:47.958 08:40:42 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:14:47.959 08:40:42 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:14:47.959 08:40:42 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:14:47.959 08:40:42 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:14:47.959 08:40:42 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:47.959 1+0 records in 00:14:47.959 1+0 records out 00:14:47.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000208995 s, 19.6 MB/s 00:14:47.959 08:40:42 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:47.959 08:40:42 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:14:47.959 08:40:42 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:47.959 08:40:42 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:14:47.959 08:40:42 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:14:47.959 08:40:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:47.959 08:40:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:47.959 08:40:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:47.959 08:40:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:47.959 08:40:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:48.216 08:40:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:48.216 { 00:14:48.216 "nbd_device": "/dev/nbd0", 00:14:48.216 "bdev_name": "Malloc0" 00:14:48.216 }, 00:14:48.216 { 00:14:48.216 "nbd_device": "/dev/nbd1", 00:14:48.216 "bdev_name": "Malloc1" 00:14:48.216 } 00:14:48.216 ]' 00:14:48.216 08:40:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:48.216 { 00:14:48.216 "nbd_device": "/dev/nbd0", 00:14:48.216 "bdev_name": "Malloc0" 00:14:48.216 }, 00:14:48.216 { 00:14:48.216 "nbd_device": "/dev/nbd1", 00:14:48.216 "bdev_name": "Malloc1" 00:14:48.216 } 00:14:48.216 ]' 00:14:48.216 08:40:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:48.473 08:40:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:48.473 /dev/nbd1' 00:14:48.473 08:40:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:48.473 /dev/nbd1' 00:14:48.473 08:40:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:48.473 08:40:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:14:48.473 08:40:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:14:48.473 08:40:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:14:48.473 08:40:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:14:48.473 08:40:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:14:48.473 08:40:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:48.473 08:40:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:48.473 08:40:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:48.473 08:40:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:14:48.473 08:40:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:48.473 08:40:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:14:48.473 256+0 records in 00:14:48.473 256+0 records out 00:14:48.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502981 s, 208 MB/s 00:14:48.473 08:40:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:14:48.473 08:40:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:48.473 256+0 records in 00:14:48.473 256+0 records out 00:14:48.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239209 s, 43.8 MB/s 00:14:48.474 08:40:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:48.474 08:40:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:48.474 256+0 records in 00:14:48.474 256+0 records out 00:14:48.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253425 s, 41.4 MB/s 00:14:48.474 08:40:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:14:48.474 08:40:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:48.474 08:40:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:48.474 08:40:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:48.474 08:40:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:14:48.474 08:40:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:48.474 08:40:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:48.474 08:40:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:48.474 08:40:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:14:48.474 08:40:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:48.474 08:40:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:14:48.474 08:40:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:14:48.474 08:40:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:14:48.474 08:40:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:48.474 08:40:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:48.474 08:40:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:48.474 08:40:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:14:48.474 08:40:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:48.474 08:40:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:48.731 08:40:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:48.731 08:40:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:48.731 08:40:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:48.731 08:40:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:48.731 08:40:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:48.731 08:40:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:48.731 08:40:43 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:14:48.731 08:40:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:48.731 08:40:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:48.731 08:40:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:49.019 08:40:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:49.019 08:40:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:49.019 08:40:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:49.019 08:40:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:49.019 08:40:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:49.019 08:40:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:49.019 08:40:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:49.019 08:40:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:49.019 08:40:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:49.019 08:40:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:49.019 08:40:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:49.277 08:40:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:49.277 08:40:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:49.277 08:40:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:49.277 08:40:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:49.277 08:40:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:14:49.277 08:40:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:49.277 08:40:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:14:49.277 08:40:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:14:49.277 08:40:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:14:49.277 08:40:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:14:49.277 08:40:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:49.277 08:40:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:14:49.277 08:40:43 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:14:49.535 08:40:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:14:49.792 [2024-05-15 08:40:44.400107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:49.792 [2024-05-15 08:40:44.486199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.792 [2024-05-15 08:40:44.486199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.792 [2024-05-15 08:40:44.548787] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:14:49.792 [2024-05-15 08:40:44.548869] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
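Round 0 above is the complete data-path check that every round of this test repeats: two 64 MB malloc bdevs are created over the /var/tmp/spdk-nbd.sock RPC socket, exported as /dev/nbd0 and /dev/nbd1, filled with 1 MiB of random data through dd, and byte-compared against the source file with cmp before the disks are stopped and the app is killed with SIGTERM. A minimal standalone sketch of that write/verify pattern follows; the RPC commands, dd flags, and cmp invocation mirror the trace, while the single-device flow and the /tmp/pattern path are illustrative simplifications, not the test's actual helpers.

    #!/usr/bin/env bash
    # Sketch: export one malloc bdev over nbd and verify a 1 MiB write round-trips intact.
    set -euo pipefail
    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096                  # 64 MB bdev, 4096-byte blocks -> Malloc0
    $rpc nbd_start_disk Malloc0 /dev/nbd0            # attach the bdev to the kernel nbd device
    dd if=/dev/urandom of=/tmp/pattern bs=4096 count=256             # 1 MiB random reference file
    dd if=/tmp/pattern of=/dev/nbd0 bs=4096 count=256 oflag=direct   # write it through the nbd path
    cmp -b -n 1M /tmp/pattern /dev/nbd0              # any mismatch fails the verify step
    $rpc nbd_stop_disk /dev/nbd0
    rm -f /tmp/pattern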
00:14:53.076 08:40:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:14:53.076 08:40:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:14:53.076 spdk_app_start Round 1 00:14:53.076 08:40:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2143062 /var/tmp/spdk-nbd.sock 00:14:53.076 08:40:47 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 2143062 ']' 00:14:53.076 08:40:47 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:53.076 08:40:47 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:53.076 08:40:47 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:53.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:53.076 08:40:47 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:53.076 08:40:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:53.076 08:40:47 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:53.076 08:40:47 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:14:53.076 08:40:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:53.076 Malloc0 00:14:53.076 08:40:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:53.334 Malloc1 00:14:53.334 08:40:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:53.334 08:40:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:53.334 08:40:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:53.334 08:40:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:53.334 08:40:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:53.334 08:40:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:53.334 08:40:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:53.334 08:40:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:53.334 08:40:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:53.334 08:40:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:53.334 08:40:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:53.334 08:40:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:53.334 08:40:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:14:53.334 08:40:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:53.334 08:40:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:53.334 08:40:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:14:53.592 /dev/nbd0 00:14:53.592 08:40:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:53.592 08:40:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:14:53.592 08:40:48 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:14:53.592 08:40:48 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:14:53.592 08:40:48 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:14:53.592 08:40:48 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:14:53.592 08:40:48 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:14:53.592 08:40:48 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:14:53.592 08:40:48 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:14:53.592 08:40:48 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:14:53.592 08:40:48 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:53.592 1+0 records in 00:14:53.592 1+0 records out 00:14:53.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200553 s, 20.4 MB/s 00:14:53.592 08:40:48 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:53.592 08:40:48 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:14:53.592 08:40:48 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:53.592 08:40:48 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:14:53.592 08:40:48 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:14:53.592 08:40:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:53.592 08:40:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:53.592 08:40:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:14:53.850 /dev/nbd1 00:14:53.850 08:40:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:53.850 08:40:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:53.850 08:40:48 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:14:53.850 08:40:48 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:14:53.850 08:40:48 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:14:53.850 08:40:48 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:14:53.850 08:40:48 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:14:53.850 08:40:48 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:14:53.850 08:40:48 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:14:53.850 08:40:48 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:14:53.850 08:40:48 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:53.850 1+0 records in 00:14:53.850 1+0 records out 00:14:53.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204835 s, 20.0 MB/s 00:14:53.850 08:40:48 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:53.850 08:40:48 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:14:53.850 08:40:48 event.app_repeat -- 
common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:53.850 08:40:48 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:14:53.850 08:40:48 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:14:53.850 08:40:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:53.850 08:40:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:53.850 08:40:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:53.850 08:40:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:53.850 08:40:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:54.108 { 00:14:54.108 "nbd_device": "/dev/nbd0", 00:14:54.108 "bdev_name": "Malloc0" 00:14:54.108 }, 00:14:54.108 { 00:14:54.108 "nbd_device": "/dev/nbd1", 00:14:54.108 "bdev_name": "Malloc1" 00:14:54.108 } 00:14:54.108 ]' 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:54.108 { 00:14:54.108 "nbd_device": "/dev/nbd0", 00:14:54.108 "bdev_name": "Malloc0" 00:14:54.108 }, 00:14:54.108 { 00:14:54.108 "nbd_device": "/dev/nbd1", 00:14:54.108 "bdev_name": "Malloc1" 00:14:54.108 } 00:14:54.108 ]' 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:54.108 /dev/nbd1' 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:54.108 /dev/nbd1' 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:14:54.108 256+0 records in 00:14:54.108 256+0 records out 00:14:54.108 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00377697 s, 278 MB/s 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:54.108 256+0 records in 00:14:54.108 256+0 records out 00:14:54.108 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0237418 s, 44.2 MB/s 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:54.108 256+0 records in 00:14:54.108 256+0 records out 00:14:54.108 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250638 s, 41.8 MB/s 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:54.108 08:40:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:54.367 08:40:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:54.367 08:40:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:54.367 08:40:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:54.367 08:40:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:54.367 08:40:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:54.367 08:40:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:54.367 08:40:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:54.367 08:40:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:54.367 08:40:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:54.367 08:40:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:54.625 08:40:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:54.625 08:40:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:54.625 08:40:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:54.625 08:40:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:54.625 08:40:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:54.625 08:40:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:54.625 08:40:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:54.625 08:40:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:54.625 08:40:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:54.625 08:40:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:54.625 08:40:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:54.883 08:40:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:54.883 08:40:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:54.883 08:40:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:54.883 08:40:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:54.883 08:40:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:14:54.883 08:40:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:54.883 08:40:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:14:54.883 08:40:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:14:54.883 08:40:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:14:54.883 08:40:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:14:54.883 08:40:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:54.883 08:40:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:14:54.883 08:40:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:14:55.140 08:40:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:14:55.398 [2024-05-15 08:40:50.105353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:55.656 [2024-05-15 08:40:50.193984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.656 [2024-05-15 08:40:50.193987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.656 [2024-05-15 08:40:50.258682] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:14:55.656 [2024-05-15 08:40:50.258765] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
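Each nbd attach in these rounds is gated by autotest_common.sh's waitfornbd helper, whose xtrace is the grep/break/dd sequence visible above: it polls /proc/partitions until the kernel has registered the device, then proves the device is readable with a single 4 KiB direct-I/O read. Below is a condensed sketch of that idiom; the 20-try bound is taken from the trace, while the one-second retry delay is an assumption, since the devices in this run appear on the first probe and the sleep path never shows up in the log.

    # Condensed form of the waitfornbd idiom traced from autotest_common.sh@865-886.
    waitfornbd_sketch() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # The device is listed in /proc/partitions once the nbd connection is live.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 1   # assumed retry delay; never reached in this run
        done
        ((i <= 20)) || return 1
        # Confirm the device is actually readable, not merely listed.
        dd if=/dev/"$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }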
00:14:58.184 08:40:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:14:58.184 08:40:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:14:58.184 spdk_app_start Round 2 00:14:58.184 08:40:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2143062 /var/tmp/spdk-nbd.sock 00:14:58.184 08:40:52 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 2143062 ']' 00:14:58.184 08:40:52 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:58.184 08:40:52 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:58.184 08:40:52 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:58.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:58.184 08:40:52 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:58.184 08:40:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:58.442 08:40:53 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:58.442 08:40:53 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:14:58.442 08:40:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:58.700 Malloc0 00:14:58.700 08:40:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:58.958 Malloc1 00:14:58.958 08:40:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:58.958 08:40:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:58.958 08:40:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:58.958 08:40:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:58.958 08:40:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:58.958 08:40:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:58.958 08:40:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:58.958 08:40:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:58.958 08:40:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:58.958 08:40:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:58.958 08:40:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:58.958 08:40:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:58.958 08:40:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:14:58.958 08:40:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:58.958 08:40:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:58.958 08:40:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:14:59.216 /dev/nbd0 00:14:59.216 08:40:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:59.216 08:40:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:14:59.216 08:40:53 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:14:59.216 08:40:53 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:14:59.217 08:40:53 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:14:59.217 08:40:53 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:14:59.217 08:40:53 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:14:59.217 08:40:53 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:14:59.217 08:40:53 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:14:59.217 08:40:53 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:14:59.217 08:40:53 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:59.217 1+0 records in 00:14:59.217 1+0 records out 00:14:59.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000140162 s, 29.2 MB/s 00:14:59.217 08:40:53 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:59.217 08:40:53 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:14:59.217 08:40:53 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:59.217 08:40:53 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:14:59.217 08:40:53 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:14:59.217 08:40:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:59.217 08:40:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:59.217 08:40:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:14:59.475 /dev/nbd1 00:14:59.475 08:40:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:59.475 08:40:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:59.475 08:40:54 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:14:59.475 08:40:54 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:14:59.475 08:40:54 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:14:59.475 08:40:54 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:14:59.475 08:40:54 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:14:59.475 08:40:54 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:14:59.475 08:40:54 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:14:59.475 08:40:54 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:14:59.475 08:40:54 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:59.475 1+0 records in 00:14:59.475 1+0 records out 00:14:59.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174141 s, 23.5 MB/s 00:14:59.475 08:40:54 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:59.475 08:40:54 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:14:59.475 08:40:54 event.app_repeat -- 
common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:59.475 08:40:54 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:14:59.475 08:40:54 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:14:59.475 08:40:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:59.475 08:40:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:59.475 08:40:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:59.475 08:40:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:59.475 08:40:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:59.733 08:40:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:59.733 { 00:14:59.733 "nbd_device": "/dev/nbd0", 00:14:59.733 "bdev_name": "Malloc0" 00:14:59.733 }, 00:14:59.733 { 00:14:59.733 "nbd_device": "/dev/nbd1", 00:14:59.733 "bdev_name": "Malloc1" 00:14:59.733 } 00:14:59.733 ]' 00:14:59.733 08:40:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:59.733 { 00:14:59.733 "nbd_device": "/dev/nbd0", 00:14:59.733 "bdev_name": "Malloc0" 00:14:59.733 }, 00:14:59.733 { 00:14:59.733 "nbd_device": "/dev/nbd1", 00:14:59.733 "bdev_name": "Malloc1" 00:14:59.733 } 00:14:59.733 ]' 00:14:59.733 08:40:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:59.733 08:40:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:59.733 /dev/nbd1' 00:14:59.733 08:40:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:59.733 /dev/nbd1' 00:14:59.733 08:40:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:59.733 08:40:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:14:59.733 08:40:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:14:59.733 08:40:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:14:59.733 08:40:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:14:59.733 08:40:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:14:59.733 08:40:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:59.733 08:40:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:59.733 08:40:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:59.733 08:40:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:14:59.733 08:40:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:59.733 08:40:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:14:59.733 256+0 records in 00:14:59.733 256+0 records out 00:14:59.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498757 s, 210 MB/s 00:14:59.733 08:40:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:59.733 08:40:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:59.733 256+0 records in 00:14:59.733 256+0 records out 00:14:59.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0241479 s, 43.4 MB/s 00:14:59.733 08:40:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:59.733 08:40:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:59.992 256+0 records in 00:14:59.992 256+0 records out 00:14:59.992 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257642 s, 40.7 MB/s 00:14:59.992 08:40:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:14:59.992 08:40:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:59.992 08:40:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:59.992 08:40:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:59.992 08:40:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:14:59.992 08:40:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:59.992 08:40:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:59.992 08:40:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:59.992 08:40:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:14:59.992 08:40:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:59.992 08:40:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:14:59.992 08:40:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:14:59.992 08:40:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:14:59.992 08:40:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:59.992 08:40:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:59.992 08:40:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:59.992 08:40:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:14:59.992 08:40:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.992 08:40:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:00.249 08:40:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:00.249 08:40:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:00.249 08:40:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:00.249 08:40:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.249 08:40:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.249 08:40:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:00.249 08:40:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:00.249 08:40:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.249 08:40:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.249 08:40:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:00.506 08:40:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:00.506 08:40:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:00.506 08:40:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:00.506 08:40:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.506 08:40:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.506 08:40:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:00.506 08:40:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:00.506 08:40:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.506 08:40:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:00.506 08:40:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:00.506 08:40:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:00.762 08:40:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:00.762 08:40:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:00.762 08:40:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:00.762 08:40:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:00.762 08:40:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:15:00.762 08:40:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:00.762 08:40:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:15:00.762 08:40:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:15:00.762 08:40:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:15:00.762 08:40:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:15:00.762 08:40:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:00.762 08:40:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:15:00.762 08:40:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:01.018 08:40:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:15:01.275 [2024-05-15 08:40:55.856928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:01.275 [2024-05-15 08:40:55.942795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.275 [2024-05-15 08:40:55.942800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.275 [2024-05-15 08:40:56.005236] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:15:01.275 [2024-05-15 08:40:56.005323] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
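Rounds 0 through 2 are driven by the {0..2} loop whose xtrace opens each round (event/event.sh@23-25): the harness waits for the app_repeat RPC socket, runs the Malloc/nbd verify, kills the instance with spdk_kill_instance SIGTERM, and sleeps three seconds so app_repeat can cycle into its next round. Schematically, with run_round as a hypothetical stand-in name for the verify body (it is not a function in event.sh):

    # Schematic of the event.sh driver behind Rounds 0-2; pid 2143062 is this run's
    # app_repeat process and run_round is a hypothetical name for the verify body.
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten 2143062 /var/tmp/spdk-nbd.sock                         # event.sh@25
        run_round                                    # Malloc create, nbd attach, dd + cmp
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM  # event.sh@34
        sleep 3                                                              # event.sh@35
    done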
00:15:04.552 08:40:58 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2143062 /var/tmp/spdk-nbd.sock 00:15:04.552 08:40:58 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 2143062 ']' 00:15:04.552 08:40:58 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:04.552 08:40:58 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:04.552 08:40:58 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:04.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:04.552 08:40:58 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:04.552 08:40:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:04.552 08:40:58 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:04.552 08:40:58 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:15:04.552 08:40:58 event.app_repeat -- event/event.sh@39 -- # killprocess 2143062 00:15:04.552 08:40:58 event.app_repeat -- common/autotest_common.sh@947 -- # '[' -z 2143062 ']' 00:15:04.552 08:40:58 event.app_repeat -- common/autotest_common.sh@951 -- # kill -0 2143062 00:15:04.552 08:40:58 event.app_repeat -- common/autotest_common.sh@952 -- # uname 00:15:04.552 08:40:58 event.app_repeat -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:04.552 08:40:58 event.app_repeat -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2143062 00:15:04.552 08:40:58 event.app_repeat -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:04.552 08:40:58 event.app_repeat -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:04.552 08:40:58 event.app_repeat -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2143062' 00:15:04.552 killing process with pid 2143062 00:15:04.552 08:40:58 event.app_repeat -- common/autotest_common.sh@966 -- # kill 2143062 00:15:04.552 08:40:58 event.app_repeat -- common/autotest_common.sh@971 -- # wait 2143062 00:15:04.552 spdk_app_start is called in Round 0. 00:15:04.552 Shutdown signal received, stop current app iteration 00:15:04.552 Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 reinitialization... 00:15:04.552 spdk_app_start is called in Round 1. 00:15:04.552 Shutdown signal received, stop current app iteration 00:15:04.552 Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 reinitialization... 00:15:04.552 spdk_app_start is called in Round 2. 00:15:04.552 Shutdown signal received, stop current app iteration 00:15:04.552 Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 reinitialization... 00:15:04.552 spdk_app_start is called in Round 3. 
00:15:04.552 Shutdown signal received, stop current app iteration 00:15:04.552 08:40:59 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:15:04.552 08:40:59 event.app_repeat -- event/event.sh@42 -- # return 0 00:15:04.552 00:15:04.552 real 0m17.764s 00:15:04.552 user 0m39.112s 00:15:04.552 sys 0m3.288s 00:15:04.552 08:40:59 event.app_repeat -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:04.552 08:40:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:04.552 ************************************ 00:15:04.552 END TEST app_repeat 00:15:04.552 ************************************ 00:15:04.552 08:40:59 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:15:04.552 08:40:59 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:15:04.552 08:40:59 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:04.552 08:40:59 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:04.552 08:40:59 event -- common/autotest_common.sh@10 -- # set +x 00:15:04.552 ************************************ 00:15:04.553 START TEST cpu_locks 00:15:04.553 ************************************ 00:15:04.553 08:40:59 event.cpu_locks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:15:04.553 * Looking for test storage... 00:15:04.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:15:04.553 08:40:59 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:15:04.553 08:40:59 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:15:04.553 08:40:59 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:15:04.553 08:40:59 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:15:04.553 08:40:59 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:04.553 08:40:59 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:04.553 08:40:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:04.553 ************************************ 00:15:04.553 START TEST default_locks 00:15:04.553 ************************************ 00:15:04.553 08:40:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # default_locks 00:15:04.553 08:40:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2145413 00:15:04.553 08:40:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:15:04.553 08:40:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2145413 00:15:04.553 08:40:59 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 2145413 ']' 00:15:04.553 08:40:59 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.553 08:40:59 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:04.553 08:40:59 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
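The cpu_locks suite starting here exercises SPDK's per-core lock files. default_locks launches spdk_tgt on core mask 0x1, asserts with lslocks that the process holds a spdk_cpu_lock file, kills it, and then expects waitforlisten on the dead pid to fail (the NOT/es=1 branch further down). The stray 'lslocks: write error' in the output below is lslocks hitting a closed pipe once grep -q exits on its first match, not a test failure. A minimal sketch of the assertion, with sleep standing in for the script's waitforlisten helper:

    # Sketch of the default_locks check (cpu_locks.sh@45-52); sleep is a stand-in
    # for waitforlisten, and the assertions mirror the traced commands.
    build/bin/spdk_tgt -m 0x1 &
    pid=$!
    sleep 1                                     # stand-in for: waitforlisten "$pid"
    lslocks -p "$pid" | grep -q spdk_cpu_lock   # core-0 lock file must be held
    kill "$pid" && wait "$pid" || true          # killprocess; the lock dies with the pid
    ! kill -0 "$pid" 2>/dev/null                # process gone -> a later waitforlisten fails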
00:15:04.553 08:40:59 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:04.553 08:40:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:15:04.553 [2024-05-15 08:40:59.283781] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:04.553 [2024-05-15 08:40:59.283863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2145413 ] 00:15:04.553 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.811 [2024-05-15 08:40:59.349245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.811 [2024-05-15 08:40:59.429809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.069 08:40:59 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:05.069 08:40:59 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 0 00:15:05.069 08:40:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2145413 00:15:05.069 08:40:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2145413 00:15:05.069 08:40:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:05.327 lslocks: write error 00:15:05.327 08:40:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2145413 00:15:05.327 08:40:59 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # '[' -z 2145413 ']' 00:15:05.327 08:40:59 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # kill -0 2145413 00:15:05.327 08:40:59 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # uname 00:15:05.327 08:40:59 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:05.327 08:40:59 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2145413 00:15:05.327 08:40:59 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:05.327 08:40:59 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:05.327 08:40:59 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2145413' 00:15:05.327 killing process with pid 2145413 00:15:05.327 08:40:59 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # kill 2145413 00:15:05.327 08:40:59 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # wait 2145413 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2145413 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2145413 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- 
# waitforlisten 2145413 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 2145413 ']' 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:15:05.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (2145413) - No such process 00:15:05.584 ERROR: process (pid: 2145413) is no longer running 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 1 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:15:05.584 00:15:05.584 real 0m1.125s 00:15:05.584 user 0m1.035s 00:15:05.584 sys 0m0.526s 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:05.584 08:41:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:15:05.584 ************************************ 00:15:05.584 END TEST default_locks 00:15:05.584 ************************************ 00:15:05.842 08:41:00 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:15:05.842 08:41:00 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:05.842 08:41:00 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:05.842 08:41:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:05.842 ************************************ 00:15:05.842 START TEST default_locks_via_rpc 00:15:05.842 ************************************ 00:15:05.842 08:41:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # default_locks_via_rpc 00:15:05.842 08:41:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2145575 00:15:05.842 08:41:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2145575 00:15:05.842 08:41:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:15:05.842 08:41:00 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2145575 ']' 00:15:05.842 08:41:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.842 08:41:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:05.842 08:41:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.842 08:41:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:05.842 08:41:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.842 [2024-05-15 08:41:00.464235] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:05.842 [2024-05-15 08:41:00.464327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2145575 ] 00:15:05.842 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.842 [2024-05-15 08:41:00.539621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.842 [2024-05-15 08:41:00.626148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.099 08:41:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:06.099 08:41:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:15:06.099 08:41:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:15:06.099 08:41:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:06.099 08:41:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.099 08:41:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:06.099 08:41:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:15:06.099 08:41:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:15:06.099 08:41:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:15:06.100 08:41:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:15:06.100 08:41:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:15:06.100 08:41:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:06.100 08:41:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.357 08:41:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:06.357 08:41:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2145575 00:15:06.357 08:41:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2145575 00:15:06.357 08:41:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:06.644 08:41:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2145575 00:15:06.644 08:41:01 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # '[' -z 2145575 ']' 00:15:06.644 08:41:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # kill -0 2145575 00:15:06.644 08:41:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # uname 00:15:06.644 08:41:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:06.644 08:41:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2145575 00:15:06.644 08:41:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:06.644 08:41:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:06.644 08:41:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2145575' 00:15:06.644 killing process with pid 2145575 00:15:06.644 08:41:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # kill 2145575 00:15:06.644 08:41:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # wait 2145575 00:15:06.902 00:15:06.902 real 0m1.260s 00:15:06.902 user 0m1.194s 00:15:06.902 sys 0m0.530s 00:15:06.902 08:41:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:06.902 08:41:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.902 ************************************ 00:15:06.902 END TEST default_locks_via_rpc 00:15:06.902 ************************************ 00:15:07.161 08:41:01 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:15:07.161 08:41:01 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:07.161 08:41:01 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:07.161 08:41:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:07.161 ************************************ 00:15:07.161 START TEST non_locking_app_on_locked_coremask 00:15:07.161 ************************************ 00:15:07.161 08:41:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # non_locking_app_on_locked_coremask 00:15:07.161 08:41:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2145738 00:15:07.161 08:41:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:15:07.161 08:41:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2145738 /var/tmp/spdk.sock 00:15:07.161 08:41:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2145738 ']' 00:15:07.161 08:41:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.161 08:41:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:07.161 08:41:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
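A note on the via-RPC variant that just finished: the target claims its per-core lock file at launch as usual, the test then releases it with the framework_disable_cpumask_locks RPC, confirms no spdk_cpu_lock file is held, and re-claims it with framework_enable_cpumask_locks. A minimal reproduction of that sequence, assuming a spdk_tgt already listening on the default /var/tmp/spdk.sock and the rpc.py client shipped with SPDK:

    # release the core locks, verify, then take them back (sketch, not from the suite)
    ./scripts/rpc.py framework_disable_cpumask_locks
    lslocks -p "$(pidof spdk_tgt)" | grep spdk_cpu_lock || echo "no core locks held"
    ./scripts/rpc.py framework_enable_cpumask_locks
    lslocks -p "$(pidof spdk_tgt)" | grep spdk_cpu_lock    # lock on core 0 is back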
00:15:07.161 08:41:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:07.161 08:41:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:07.161 [2024-05-15 08:41:01.784671] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:07.161 [2024-05-15 08:41:01.784763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2145738 ] 00:15:07.161 EAL: No free 2048 kB hugepages reported on node 1 00:15:07.161 [2024-05-15 08:41:01.859619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.161 [2024-05-15 08:41:01.946560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.418 08:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:07.418 08:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:15:07.418 08:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2145863 00:15:07.418 08:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:15:07.418 08:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2145863 /var/tmp/spdk2.sock 00:15:07.418 08:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2145863 ']' 00:15:07.418 08:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:07.418 08:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:07.418 08:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:07.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:07.418 08:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:07.418 08:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:07.676 [2024-05-15 08:41:02.248555] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:07.676 [2024-05-15 08:41:02.248651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2145863 ] 00:15:07.676 EAL: No free 2048 kB hugepages reported on node 1 00:15:07.676 [2024-05-15 08:41:02.362895] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
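The "CPU core locks deactivated" notice above is the crux of this test: the second spdk_tgt shares core mask 0x1 with the first one but is started with --disable-cpumask-locks, so it skips the per-core lock claim entirely and both processes come up on core 0. A hedged sketch of the same arrangement, with the binary path, mask, and socket name taken from the log above:

    # first instance claims core 0; second one opts out of the lock check
    ./build/bin/spdk_tgt -m 0x1 &
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # only the first pid should appear in lslocks holding spdk_cpu_lock_000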
00:15:07.676 [2024-05-15 08:41:02.362934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.934 [2024-05-15 08:41:02.545773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.500 08:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:08.500 08:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:15:08.500 08:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2145738 00:15:08.500 08:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2145738 00:15:08.500 08:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:09.434 lslocks: write error 00:15:09.434 08:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2145738 00:15:09.434 08:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2145738 ']' 00:15:09.434 08:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 2145738 00:15:09.434 08:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:15:09.434 08:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:09.434 08:41:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2145738 00:15:09.434 08:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:09.434 08:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:09.434 08:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2145738' 00:15:09.434 killing process with pid 2145738 00:15:09.434 08:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 2145738 00:15:09.434 08:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 2145738 00:15:10.368 08:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2145863 00:15:10.368 08:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2145863 ']' 00:15:10.368 08:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 2145863 00:15:10.368 08:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:15:10.368 08:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:10.368 08:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2145863 00:15:10.368 08:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:10.368 08:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:10.368 08:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2145863' 00:15:10.368 
killing process with pid 2145863 00:15:10.368 08:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 2145863 00:15:10.368 08:41:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 2145863 00:15:10.626 00:15:10.626 real 0m3.496s 00:15:10.626 user 0m3.626s 00:15:10.626 sys 0m1.120s 00:15:10.626 08:41:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:10.626 08:41:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:10.626 ************************************ 00:15:10.626 END TEST non_locking_app_on_locked_coremask 00:15:10.626 ************************************ 00:15:10.626 08:41:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:15:10.626 08:41:05 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:10.626 08:41:05 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:10.626 08:41:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:10.626 ************************************ 00:15:10.626 START TEST locking_app_on_unlocked_coremask 00:15:10.626 ************************************ 00:15:10.626 08:41:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_unlocked_coremask 00:15:10.626 08:41:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2146186 00:15:10.626 08:41:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2146186 /var/tmp/spdk.sock 00:15:10.626 08:41:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:15:10.626 08:41:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2146186 ']' 00:15:10.626 08:41:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.626 08:41:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:10.626 08:41:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.626 08:41:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:10.626 08:41:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:10.626 [2024-05-15 08:41:05.338091] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:10.626 [2024-05-15 08:41:05.338184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146186 ] 00:15:10.626 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.626 [2024-05-15 08:41:05.411889] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
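A side note on the stray "lslocks: write error" lines in this output: the locks_exist helper pipes lslocks into grep -q, and grep -q exits as soon as it sees the first spdk_cpu_lock match, so lslocks takes an EPIPE on the rest of its output and prints a write error on stderr. The check itself still passes, so the message is harmless. The pattern, presumably as run by event/cpu_locks.sh@22:

    # grep -q closing the pipe early is what produces the benign write error
    lslocks -p "$pid" | grep -q spdk_cpu_lock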
00:15:10.626 [2024-05-15 08:41:05.411930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.884 [2024-05-15 08:41:05.502013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.142 08:41:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:11.142 08:41:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:15:11.142 08:41:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2146303 00:15:11.142 08:41:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:15:11.142 08:41:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2146303 /var/tmp/spdk2.sock 00:15:11.142 08:41:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2146303 ']' 00:15:11.142 08:41:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:11.142 08:41:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:11.142 08:41:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:11.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:11.142 08:41:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:11.142 08:41:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:11.142 [2024-05-15 08:41:05.812096] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
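This test inverts the previous one: here the first target (pid 2146186) is the one started with --disable-cpumask-locks, so the second target being launched above, pid 2146303 with a plain -m 0x1 on /var/tmp/spdk2.sock, is the process that actually claims core 0, and the locks_exist check a few lines down is accordingly run against 2146303 rather than 2146186. Mirroring that check by hand (a sketch, not part of the suite):

    # the lock holder is the second instance, not the first
    lslocks -p 2146303 | grep spdk_cpu_lock    # expect one spdk_cpu_lock_000 entry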
00:15:11.142 [2024-05-15 08:41:05.812193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146303 ] 00:15:11.142 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.142 [2024-05-15 08:41:05.920980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.400 [2024-05-15 08:41:06.098416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.966 08:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:11.966 08:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:15:11.966 08:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2146303 00:15:11.966 08:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2146303 00:15:11.966 08:41:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:12.900 lslocks: write error 00:15:12.900 08:41:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2146186 00:15:12.900 08:41:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2146186 ']' 00:15:12.900 08:41:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 2146186 00:15:12.900 08:41:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:15:12.900 08:41:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:12.900 08:41:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2146186 00:15:12.900 08:41:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:12.900 08:41:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:12.900 08:41:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2146186' 00:15:12.900 killing process with pid 2146186 00:15:12.900 08:41:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 2146186 00:15:12.900 08:41:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 2146186 00:15:13.466 08:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2146303 00:15:13.466 08:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2146303 ']' 00:15:13.466 08:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 2146303 00:15:13.466 08:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:15:13.466 08:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:13.466 08:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2146303 00:15:13.466 08:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 
00:15:13.466 08:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:13.466 08:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2146303' 00:15:13.466 killing process with pid 2146303 00:15:13.466 08:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 2146303 00:15:13.466 08:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 2146303 00:15:14.032 00:15:14.032 real 0m3.351s 00:15:14.032 user 0m3.464s 00:15:14.032 sys 0m1.124s 00:15:14.032 08:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:14.032 08:41:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:14.032 ************************************ 00:15:14.032 END TEST locking_app_on_unlocked_coremask 00:15:14.032 ************************************ 00:15:14.032 08:41:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:15:14.032 08:41:08 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:14.032 08:41:08 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:14.032 08:41:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:14.032 ************************************ 00:15:14.032 START TEST locking_app_on_locked_coremask 00:15:14.032 ************************************ 00:15:14.032 08:41:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_locked_coremask 00:15:14.032 08:41:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2146624 00:15:14.032 08:41:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:15:14.032 08:41:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2146624 /var/tmp/spdk.sock 00:15:14.032 08:41:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2146624 ']' 00:15:14.032 08:41:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.032 08:41:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:14.032 08:41:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.032 08:41:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:14.032 08:41:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:14.032 [2024-05-15 08:41:08.742814] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:15:14.032 [2024-05-15 08:41:08.742905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146624 ] 00:15:14.032 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.032 [2024-05-15 08:41:08.816549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.290 [2024-05-15 08:41:08.899985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.549 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:14.549 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:15:14.549 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2146737 00:15:14.549 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:15:14.549 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2146737 /var/tmp/spdk2.sock 00:15:14.549 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:15:14.549 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2146737 /var/tmp/spdk2.sock 00:15:14.549 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:15:14.549 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:14.549 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:15:14.549 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:14.549 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 2146737 /var/tmp/spdk2.sock 00:15:14.549 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2146737 ']' 00:15:14.549 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:14.549 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:14.549 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:14.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:14.549 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:14.549 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:14.549 [2024-05-15 08:41:09.207393] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
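Pid 2146737 is being launched here on the same -m 0x1 mask while pid 2146624 still holds core 0, this time without --disable-cpumask-locks; the claim failure and "No such process" fallout just below are the intended outcome, which the NOT wrapper turns into a passing assertion. Judging from the lslocks output and the /var/tmp/spdk_cpu_lock_000..002 names elsewhere in this log, the claim amounts to an advisory lock on a per-core file, roughly as follows (the path format and lock flavor are assumptions inferred from the output, not taken from the SPDK sources):

    # minimal sketch of a per-core claim attempt
    exec 9>/var/tmp/spdk_cpu_lock_000
    flock -n 9 || { echo "core 0 already claimed by another process"; exit 1; }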
00:15:14.549 [2024-05-15 08:41:09.207473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146737 ] 00:15:14.549 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.549 [2024-05-15 08:41:09.317416] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2146624 has claimed it. 00:15:14.549 [2024-05-15 08:41:09.317480] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:15:15.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (2146737) - No such process 00:15:15.115 ERROR: process (pid: 2146737) is no longer running 00:15:15.115 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:15.115 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 1 00:15:15.115 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:15:15.115 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:15.115 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:15.115 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:15.115 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2146624 00:15:15.115 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2146624 00:15:15.115 08:41:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:15.681 lslocks: write error 00:15:15.681 08:41:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2146624 00:15:15.681 08:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2146624 ']' 00:15:15.681 08:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 2146624 00:15:15.681 08:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:15:15.681 08:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:15.681 08:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2146624 00:15:15.681 08:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:15.681 08:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:15.681 08:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2146624' 00:15:15.681 killing process with pid 2146624 00:15:15.681 08:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 2146624 00:15:15.681 08:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 2146624 00:15:15.939 00:15:15.939 real 0m2.029s 00:15:15.939 user 0m2.147s 00:15:15.939 sys 0m0.668s 00:15:15.940 08:41:10 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1123 -- # xtrace_disable 00:15:15.940 08:41:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:15.940 ************************************ 00:15:15.940 END TEST locking_app_on_locked_coremask 00:15:15.940 ************************************ 00:15:16.198 08:41:10 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:15:16.198 08:41:10 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:16.198 08:41:10 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:16.198 08:41:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:16.198 ************************************ 00:15:16.198 START TEST locking_overlapped_coremask 00:15:16.198 ************************************ 00:15:16.198 08:41:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask 00:15:16.198 08:41:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2146910 00:15:16.198 08:41:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:15:16.198 08:41:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2146910 /var/tmp/spdk.sock 00:15:16.198 08:41:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 2146910 ']' 00:15:16.198 08:41:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.198 08:41:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:16.198 08:41:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.198 08:41:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:16.198 08:41:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:16.198 [2024-05-15 08:41:10.826156] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:15:16.198 [2024-05-15 08:41:10.826258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146910 ] 00:15:16.198 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.198 [2024-05-15 08:41:10.897310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:16.198 [2024-05-15 08:41:10.984691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.198 [2024-05-15 08:41:10.984741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.198 [2024-05-15 08:41:10.984758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.456 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:16.456 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 0 00:15:16.456 08:41:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2147036 00:15:16.456 08:41:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:15:16.456 08:41:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2147036 /var/tmp/spdk2.sock 00:15:16.456 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:15:16.456 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2147036 /var/tmp/spdk2.sock 00:15:16.456 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:15:16.456 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:16.456 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:15:16.456 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:16.456 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 2147036 /var/tmp/spdk2.sock 00:15:16.456 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 2147036 ']' 00:15:16.456 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:16.456 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:16.456 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:16.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:16.456 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:16.456 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:16.713 [2024-05-15 08:41:11.269570] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
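The two masks in play intersect on exactly one core: 0x7 is binary 111 (cores 0, 1, 2) and 0x1c is binary 11100 (cores 2, 3, 4), so core 2 is the only contested core, and it is precisely the one named in the claim_cpu_cores error that follows. A quick check of the overlap:

    # 0x7 & 0x1c == 0x4, i.e. only bit 2 set: the contested core is core 2
    printf '0x%x\n' $(( 0x7 & 0x1c ))    # prints 0x4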
00:15:16.713 [2024-05-15 08:41:11.269664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2147036 ] 00:15:16.713 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.713 [2024-05-15 08:41:11.368414] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2146910 has claimed it. 00:15:16.713 [2024-05-15 08:41:11.368492] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:15:17.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (2147036) - No such process 00:15:17.278 ERROR: process (pid: 2147036) is no longer running 00:15:17.278 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:17.278 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 1 00:15:17.278 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:15:17.278 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:17.278 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:17.278 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:17.278 08:41:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:15:17.278 08:41:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:15:17.279 08:41:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:15:17.279 08:41:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:15:17.279 08:41:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2146910 00:15:17.279 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # '[' -z 2146910 ']' 00:15:17.279 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # kill -0 2146910 00:15:17.279 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # uname 00:15:17.279 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:17.279 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2146910 00:15:17.279 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:17.279 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:17.279 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2146910' 00:15:17.279 killing process with pid 2146910 00:15:17.279 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # kill 
2146910 00:15:17.279 08:41:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # wait 2146910 00:15:17.845 00:15:17.845 real 0m1.598s 00:15:17.845 user 0m4.307s 00:15:17.845 sys 0m0.451s 00:15:17.845 08:41:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:17.845 08:41:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:17.845 ************************************ 00:15:17.845 END TEST locking_overlapped_coremask 00:15:17.845 ************************************ 00:15:17.845 08:41:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:15:17.845 08:41:12 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:17.845 08:41:12 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:17.845 08:41:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:17.845 ************************************ 00:15:17.845 START TEST locking_overlapped_coremask_via_rpc 00:15:17.845 ************************************ 00:15:17.845 08:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask_via_rpc 00:15:17.845 08:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2147205 00:15:17.845 08:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:15:17.845 08:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2147205 /var/tmp/spdk.sock 00:15:17.845 08:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2147205 ']' 00:15:17.845 08:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.845 08:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:17.845 08:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.845 08:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:17.845 08:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.845 [2024-05-15 08:41:12.481413] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:17.845 [2024-05-15 08:41:12.481500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2147205 ] 00:15:17.845 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.845 [2024-05-15 08:41:12.552846] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:15:17.845 [2024-05-15 08:41:12.552884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:18.103 [2024-05-15 08:41:12.640686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.103 [2024-05-15 08:41:12.640735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.103 [2024-05-15 08:41:12.640738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.103 08:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:18.103 08:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:15:18.361 08:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2147210 00:15:18.361 08:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2147210 /var/tmp/spdk2.sock 00:15:18.361 08:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:15:18.361 08:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2147210 ']' 00:15:18.361 08:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:18.361 08:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:18.361 08:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:18.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:18.361 08:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:18.361 08:41:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.361 [2024-05-15 08:41:12.947008] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:18.361 [2024-05-15 08:41:12.947107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2147210 ] 00:15:18.361 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.361 [2024-05-15 08:41:13.049138] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:15:18.361 [2024-05-15 08:41:13.049179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:18.619 [2024-05-15 08:41:13.222797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:18.619 [2024-05-15 08:41:13.222860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:18.619 [2024-05-15 08:41:13.222862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.184 [2024-05-15 08:41:13.893312] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2147205 has claimed it. 
00:15:19.184 request: 00:15:19.184 { 00:15:19.184 "method": "framework_enable_cpumask_locks", 00:15:19.184 "req_id": 1 00:15:19.184 } 00:15:19.184 Got JSON-RPC error response 00:15:19.184 response: 00:15:19.184 { 00:15:19.184 "code": -32603, 00:15:19.184 "message": "Failed to claim CPU core: 2" 00:15:19.184 } 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2147205 /var/tmp/spdk.sock 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2147205 ']' 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:19.184 08:41:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.442 08:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:19.442 08:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:15:19.442 08:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2147210 /var/tmp/spdk2.sock 00:15:19.442 08:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2147210 ']' 00:15:19.442 08:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:19.442 08:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:19.442 08:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:19.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
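Unlike the launch-time variant, the via-RPC overlap surfaces the conflict as a JSON-RPC error instead of a process exit: both targets start with --disable-cpumask-locks, the first claims cores 0-2 through framework_enable_cpumask_locks, and the second's attempt on the 0x1c mask is refused with code -32603 ("Failed to claim CPU core: 2") while the process stays up. A hedged check against the second instance, assuming the bundled rpc.py client and its -s socket option:

    # expect the enable call to fail while the peer still holds core 2
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo "claim refused as expected: core 2 already locked"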
00:15:19.442 08:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:19.442 08:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.705 08:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:19.705 08:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:15:19.705 08:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:15:19.705 08:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:15:19.705 08:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:15:19.705 08:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:15:19.705 00:15:19.705 real 0m1.979s 00:15:19.705 user 0m1.008s 00:15:19.705 sys 0m0.184s 00:15:19.705 08:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:19.705 08:41:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.705 ************************************ 00:15:19.705 END TEST locking_overlapped_coremask_via_rpc 00:15:19.705 ************************************ 00:15:19.705 08:41:14 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:15:19.705 08:41:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2147205 ]] 00:15:19.705 08:41:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2147205 00:15:19.705 08:41:14 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 2147205 ']' 00:15:19.705 08:41:14 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 2147205 00:15:19.705 08:41:14 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:15:19.705 08:41:14 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:19.705 08:41:14 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2147205 00:15:19.705 08:41:14 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:19.705 08:41:14 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:19.705 08:41:14 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2147205' 00:15:19.705 killing process with pid 2147205 00:15:19.705 08:41:14 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 2147205 00:15:19.705 08:41:14 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 2147205 00:15:20.271 08:41:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2147210 ]] 00:15:20.271 08:41:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2147210 00:15:20.271 08:41:14 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 2147210 ']' 00:15:20.271 08:41:14 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 2147210 00:15:20.271 08:41:14 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:15:20.271 08:41:14 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' 
Linux = Linux ']' 00:15:20.271 08:41:14 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2147210 00:15:20.271 08:41:14 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:15:20.271 08:41:14 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:15:20.271 08:41:14 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2147210' 00:15:20.271 killing process with pid 2147210 00:15:20.271 08:41:14 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 2147210 00:15:20.271 08:41:14 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 2147210 00:15:20.529 08:41:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:15:20.529 08:41:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:15:20.529 08:41:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2147205 ]] 00:15:20.529 08:41:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2147205 00:15:20.529 08:41:15 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 2147205 ']' 00:15:20.529 08:41:15 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 2147205 00:15:20.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2147205) - No such process 00:15:20.529 08:41:15 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 2147205 is not found' 00:15:20.529 Process with pid 2147205 is not found 00:15:20.529 08:41:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2147210 ]] 00:15:20.529 08:41:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2147210 00:15:20.529 08:41:15 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 2147210 ']' 00:15:20.529 08:41:15 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 2147210 00:15:20.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2147210) - No such process 00:15:20.529 08:41:15 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 2147210 is not found' 00:15:20.529 Process with pid 2147210 is not found 00:15:20.529 08:41:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:15:20.529 00:15:20.529 real 0m16.131s 00:15:20.529 user 0m27.544s 00:15:20.529 sys 0m5.518s 00:15:20.529 08:41:15 event.cpu_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:20.529 08:41:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:20.529 ************************************ 00:15:20.529 END TEST cpu_locks 00:15:20.529 ************************************ 00:15:20.529 00:15:20.529 real 0m42.254s 00:15:20.529 user 1m20.232s 00:15:20.529 sys 0m9.645s 00:15:20.529 08:41:15 event -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:20.529 08:41:15 event -- common/autotest_common.sh@10 -- # set +x 00:15:20.529 ************************************ 00:15:20.529 END TEST event 00:15:20.529 ************************************ 00:15:20.787 08:41:15 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:15:20.787 08:41:15 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:20.787 08:41:15 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:20.787 08:41:15 -- common/autotest_common.sh@10 -- # set +x 00:15:20.787 ************************************ 00:15:20.787 START TEST thread 00:15:20.787 ************************************ 00:15:20.787 08:41:15 thread -- 
common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:15:20.787 * Looking for test storage... 00:15:20.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:15:20.787 08:41:15 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:15:20.787 08:41:15 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:15:20.787 08:41:15 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:20.787 08:41:15 thread -- common/autotest_common.sh@10 -- # set +x 00:15:20.787 ************************************ 00:15:20.787 START TEST thread_poller_perf 00:15:20.787 ************************************ 00:15:20.787 08:41:15 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:15:20.787 [2024-05-15 08:41:15.459379] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:20.787 [2024-05-15 08:41:15.459442] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2147587 ] 00:15:20.787 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.787 [2024-05-15 08:41:15.525125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.069 [2024-05-15 08:41:15.609616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.069 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:15:22.014 ====================================== 00:15:22.014 busy:2710669343 (cyc) 00:15:22.014 total_run_count: 293000 00:15:22.014 tsc_hz: 2700000000 (cyc) 00:15:22.014 ====================================== 00:15:22.014 poller_cost: 9251 (cyc), 3426 (nsec) 00:15:22.014 00:15:22.014 real 0m1.248s 00:15:22.014 user 0m1.152s 00:15:22.014 sys 0m0.090s 00:15:22.014 08:41:16 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:22.014 08:41:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:15:22.014 ************************************ 00:15:22.014 END TEST thread_poller_perf 00:15:22.014 ************************************ 00:15:22.014 08:41:16 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:15:22.014 08:41:16 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:15:22.014 08:41:16 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:22.014 08:41:16 thread -- common/autotest_common.sh@10 -- # set +x 00:15:22.014 ************************************ 00:15:22.014 START TEST thread_poller_perf 00:15:22.014 ************************************ 00:15:22.014 08:41:16 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:15:22.014 [2024-05-15 08:41:16.760156] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
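The first poller_perf summary above reduces exactly as expected: per-poll cost is busy cycles divided by iteration count, converted to wall time via the TSC rate:

    poller_cost = busy / total_run_count = 2710669343 / 293000 ≈ 9251 cyc
    9251 cyc / 2.7 cyc/nsec (tsc_hz = 2700000000) ≈ 3426 nsec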
00:15:22.014 [2024-05-15 08:41:16.760228] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2147853 ] 00:15:22.014 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.272 [2024-05-15 08:41:16.831107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.272 [2024-05-15 08:41:16.917615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.272 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:15:23.203 ====================================== 00:15:23.203 busy:2702881283 (cyc) 00:15:23.203 total_run_count: 3854000 00:15:23.203 tsc_hz: 2700000000 (cyc) 00:15:23.203 ====================================== 00:15:23.203 poller_cost: 701 (cyc), 259 (nsec) 00:15:23.203 00:15:23.203 real 0m1.246s 00:15:23.203 user 0m1.157s 00:15:23.203 sys 0m0.083s 00:15:23.203 08:41:17 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:23.203 08:41:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:15:23.203 ************************************ 00:15:23.203 END TEST thread_poller_perf 00:15:23.203 ************************************ 00:15:23.461 08:41:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:15:23.461 00:15:23.461 real 0m2.652s 00:15:23.461 user 0m2.369s 00:15:23.461 sys 0m0.280s 00:15:23.461 08:41:18 thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:23.461 08:41:18 thread -- common/autotest_common.sh@10 -- # set +x 00:15:23.461 ************************************ 00:15:23.461 END TEST thread 00:15:23.461 ************************************ 00:15:23.461 08:41:18 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:15:23.461 08:41:18 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:23.461 08:41:18 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:23.461 08:41:18 -- common/autotest_common.sh@10 -- # set +x 00:15:23.461 ************************************ 00:15:23.461 START TEST accel 00:15:23.461 ************************************ 00:15:23.461 08:41:18 accel -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:15:23.461 * Looking for test storage... 
00:15:23.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:15:23.461 08:41:18 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:15:23.461 08:41:18 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:15:23.461 08:41:18 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:15:23.461 08:41:18 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2148051 00:15:23.461 08:41:18 accel -- accel/accel.sh@63 -- # waitforlisten 2148051 00:15:23.461 08:41:18 accel -- common/autotest_common.sh@828 -- # '[' -z 2148051 ']' 00:15:23.461 08:41:18 accel -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.461 08:41:18 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:15:23.461 08:41:18 accel -- accel/accel.sh@61 -- # build_accel_config 00:15:23.461 08:41:18 accel -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:23.461 08:41:18 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:23.461 08:41:18 accel -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.461 08:41:18 accel -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:23.461 08:41:18 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:23.461 08:41:18 accel -- common/autotest_common.sh@10 -- # set +x 00:15:23.461 08:41:18 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:23.461 08:41:18 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:23.461 08:41:18 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:23.461 08:41:18 accel -- accel/accel.sh@40 -- # local IFS=, 00:15:23.461 08:41:18 accel -- accel/accel.sh@41 -- # jq -r . 00:15:23.461 [2024-05-15 08:41:18.172654] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:23.461 [2024-05-15 08:41:18.172726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2148051 ] 00:15:23.461 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.461 [2024-05-15 08:41:18.241697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.718 [2024-05-15 08:41:18.329350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.976 08:41:18 accel -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:23.976 08:41:18 accel -- common/autotest_common.sh@861 -- # return 0 00:15:23.976 08:41:18 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:15:23.976 08:41:18 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:15:23.976 08:41:18 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:15:23.976 08:41:18 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:15:23.976 08:41:18 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:15:23.976 08:41:18 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:15:23.976 08:41:18 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:23.976 08:41:18 accel -- common/autotest_common.sh@10 -- # set +x 00:15:23.976 08:41:18 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:15:23.976 08:41:18 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:23.976 08:41:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:23.976 08:41:18 accel -- accel/accel.sh@72 -- # IFS== 00:15:23.976 08:41:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:23.976 08:41:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:23.976 08:41:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:23.976 08:41:18 accel -- accel/accel.sh@72 -- # IFS== 00:15:23.976 08:41:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:23.976 08:41:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:23.976 08:41:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:23.976 08:41:18 accel -- accel/accel.sh@72 -- # IFS== 00:15:23.976 08:41:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:23.976 08:41:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:23.976 08:41:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:23.976 08:41:18 accel -- accel/accel.sh@72 -- # IFS== 00:15:23.976 08:41:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:23.976 08:41:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:23.976 08:41:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:23.976 08:41:18 accel -- accel/accel.sh@72 -- # IFS== 00:15:23.976 08:41:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:23.976 08:41:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:23.976 08:41:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:23.976 08:41:18 accel -- accel/accel.sh@72 -- # IFS== 00:15:23.976 08:41:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:23.976 08:41:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:23.976 08:41:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:23.976 08:41:18 accel -- accel/accel.sh@72 -- # IFS== 00:15:23.976 08:41:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:23.976 08:41:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:23.976 08:41:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:23.976 08:41:18 accel -- accel/accel.sh@72 -- # IFS== 00:15:23.976 08:41:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:23.976 08:41:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:23.976 08:41:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:23.976 08:41:18 accel -- accel/accel.sh@72 -- # IFS== 00:15:23.976 08:41:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:23.976 08:41:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:23.976 08:41:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:23.976 08:41:18 accel -- accel/accel.sh@72 -- # IFS== 00:15:23.976 08:41:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:23.976 08:41:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:23.977 08:41:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:23.977 08:41:18 accel -- accel/accel.sh@72 -- # IFS== 00:15:23.977 08:41:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:23.977 08:41:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:23.977 08:41:18 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:15:23.977 08:41:18 accel -- accel/accel.sh@72 -- # IFS== 00:15:23.977 08:41:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:23.977 08:41:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:23.977 08:41:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:23.977 08:41:18 accel -- accel/accel.sh@72 -- # IFS== 00:15:23.977 08:41:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:23.977 08:41:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:23.977 08:41:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:23.977 08:41:18 accel -- accel/accel.sh@72 -- # IFS== 00:15:23.977 08:41:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:23.977 08:41:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:23.977 08:41:18 accel -- accel/accel.sh@75 -- # killprocess 2148051 00:15:23.977 08:41:18 accel -- common/autotest_common.sh@947 -- # '[' -z 2148051 ']' 00:15:23.977 08:41:18 accel -- common/autotest_common.sh@951 -- # kill -0 2148051 00:15:23.977 08:41:18 accel -- common/autotest_common.sh@952 -- # uname 00:15:23.977 08:41:18 accel -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:23.977 08:41:18 accel -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2148051 00:15:23.977 08:41:18 accel -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:23.977 08:41:18 accel -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:23.977 08:41:18 accel -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2148051' 00:15:23.977 killing process with pid 2148051 00:15:23.977 08:41:18 accel -- common/autotest_common.sh@966 -- # kill 2148051 00:15:23.977 08:41:18 accel -- common/autotest_common.sh@971 -- # wait 2148051 00:15:24.542 08:41:19 accel -- accel/accel.sh@76 -- # trap - ERR 00:15:24.542 08:41:19 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:15:24.542 08:41:19 accel -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:24.542 08:41:19 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:24.542 08:41:19 accel -- common/autotest_common.sh@10 -- # set +x 00:15:24.542 08:41:19 accel.accel_help -- common/autotest_common.sh@1122 -- # accel_perf -h 00:15:24.542 08:41:19 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:15:24.542 08:41:19 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:15:24.542 08:41:19 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:24.542 08:41:19 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:24.543 08:41:19 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:24.543 08:41:19 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:24.543 08:41:19 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:24.543 08:41:19 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:15:24.543 08:41:19 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:15:24.543 08:41:19 accel.accel_help -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:24.543 08:41:19 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:15:24.543 08:41:19 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:15:24.543 08:41:19 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:15:24.543 08:41:19 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:24.543 08:41:19 accel -- common/autotest_common.sh@10 -- # set +x 00:15:24.543 ************************************ 00:15:24.543 START TEST accel_missing_filename 00:15:24.543 ************************************ 00:15:24.543 08:41:19 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress 00:15:24.543 08:41:19 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:15:24.543 08:41:19 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:15:24.543 08:41:19 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:15:24.543 08:41:19 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:24.543 08:41:19 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:15:24.543 08:41:19 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:24.543 08:41:19 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:15:24.543 08:41:19 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:15:24.543 08:41:19 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:15:24.543 08:41:19 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:24.543 08:41:19 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:24.543 08:41:19 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:24.543 08:41:19 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:24.543 08:41:19 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:24.543 08:41:19 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:15:24.543 08:41:19 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:15:24.543 [2024-05-15 08:41:19.170817] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:24.543 [2024-05-15 08:41:19.170881] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2148219 ] 00:15:24.543 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.543 [2024-05-15 08:41:19.244168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.543 [2024-05-15 08:41:19.334582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.801 [2024-05-15 08:41:19.397404] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:24.801 [2024-05-15 08:41:19.479062] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:15:24.801 A filename is required. 
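That abort is the point of the test: compress/decompress workloads take their input through -l, so accel_perf refuses to start without it. The passing shape would supply the bundled input file (path relative to an SPDK checkout; the verify test that follows shows that adding -y on top of this is rejected separately for compress):

    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib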
00:15:24.801 08:41:19 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:15:24.801 08:41:19 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:24.801 08:41:19 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:15:24.801 08:41:19 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:15:24.801 08:41:19 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:15:24.801 08:41:19 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:24.801 00:15:24.801 real 0m0.407s 00:15:24.801 user 0m0.280s 00:15:24.801 sys 0m0.162s 00:15:24.801 08:41:19 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:24.801 08:41:19 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:15:24.801 ************************************ 00:15:24.801 END TEST accel_missing_filename 00:15:24.801 ************************************ 00:15:24.801 08:41:19 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:15:24.801 08:41:19 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:15:24.801 08:41:19 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:24.801 08:41:19 accel -- common/autotest_common.sh@10 -- # set +x 00:15:25.058 ************************************ 00:15:25.058 START TEST accel_compress_verify 00:15:25.058 ************************************ 00:15:25.058 08:41:19 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:15:25.058 08:41:19 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:15:25.058 08:41:19 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:15:25.058 08:41:19 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:15:25.058 08:41:19 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:25.058 08:41:19 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:15:25.058 08:41:19 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:25.059 08:41:19 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:15:25.059 08:41:19 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:15:25.059 08:41:19 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:15:25.059 08:41:19 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:25.059 08:41:19 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:25.059 08:41:19 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:25.059 08:41:19 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:25.059 08:41:19 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:25.059 
08:41:19 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:15:25.059 08:41:19 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:15:25.059 [2024-05-15 08:41:19.634774] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:25.059 [2024-05-15 08:41:19.634838] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2148249 ] 00:15:25.059 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.059 [2024-05-15 08:41:19.708520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.059 [2024-05-15 08:41:19.796291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.316 [2024-05-15 08:41:19.854901] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:25.316 [2024-05-15 08:41:19.938695] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:15:25.316 00:15:25.317 Compression does not support the verify option, aborting. 00:15:25.317 08:41:20 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:15:25.317 08:41:20 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:25.317 08:41:20 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:15:25.317 08:41:20 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:15:25.317 08:41:20 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:15:25.317 08:41:20 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:25.317 00:15:25.317 real 0m0.405s 00:15:25.317 user 0m0.283s 00:15:25.317 sys 0m0.156s 00:15:25.317 08:41:20 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:25.317 08:41:20 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:15:25.317 ************************************ 00:15:25.317 END TEST accel_compress_verify 00:15:25.317 ************************************ 00:15:25.317 08:41:20 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:15:25.317 08:41:20 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:15:25.317 08:41:20 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:25.317 08:41:20 accel -- common/autotest_common.sh@10 -- # set +x 00:15:25.317 ************************************ 00:15:25.317 START TEST accel_wrong_workload 00:15:25.317 ************************************ 00:15:25.317 08:41:20 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w foobar 00:15:25.317 08:41:20 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:15:25.317 08:41:20 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:15:25.317 08:41:20 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:15:25.317 08:41:20 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:25.317 08:41:20 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:15:25.317 08:41:20 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:25.317 08:41:20 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 
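The es=161 → es=33 → es=1 sequence above is the exit-status plumbing around NOT: codes above 128 are folded down, then normalized before the final inversion check. A simplified sketch of the logic visible in the xtrace (the real helper in autotest_common.sh also maps specific codes through a case table):

    NOT() {
        local es=0
        "$@" || es=$?
        ((es > 128)) && es=$((es - 128))   # fold signal-style exits (161 -> 33)
        ((es != 0))                        # NOT succeeds only if the command failed
    }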
00:15:25.317 08:41:20 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:15:25.317 08:41:20 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:15:25.317 08:41:20 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:25.317 08:41:20 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:25.317 08:41:20 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:25.317 08:41:20 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:25.317 08:41:20 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:25.317 08:41:20 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:15:25.317 08:41:20 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:15:25.317 Unsupported workload type: foobar 00:15:25.317 [2024-05-15 08:41:20.091464] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:15:25.317 accel_perf options: 00:15:25.317 [-h help message] 00:15:25.317 [-q queue depth per core] 00:15:25.317 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:15:25.317 [-T number of threads per core 00:15:25.317 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:15:25.317 [-t time in seconds] 00:15:25.317 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:15:25.317 [ dif_verify, , dif_generate, dif_generate_copy 00:15:25.317 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:15:25.317 [-l for compress/decompress workloads, name of uncompressed input file 00:15:25.317 [-S for crc32c workload, use this seed value (default 0) 00:15:25.317 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:15:25.317 [-f for fill workload, use this BYTE value (default 255) 00:15:25.317 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:15:25.317 [-y verify result if this switch is on] 00:15:25.317 [-a tasks to allocate per core (default: same value as -q)] 00:15:25.317 Can be used to spread operations across a wider range of memory. 
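foobar fails because -w must name one of the operations in the listing above; the positive runs later in this suite use exactly that shape, e.g. (path relative to the workspace, config options omitted):

    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y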
00:15:25.317 08:41:20 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:15:25.317 08:41:20 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:25.317 08:41:20 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:25.317 08:41:20 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:25.317 00:15:25.317 real 0m0.022s 00:15:25.317 user 0m0.017s 00:15:25.317 sys 0m0.005s 00:15:25.317 08:41:20 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:25.317 08:41:20 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:15:25.317 ************************************ 00:15:25.317 END TEST accel_wrong_workload 00:15:25.317 ************************************ 00:15:25.575 Error: writing output failed: Broken pipe 00:15:25.576 08:41:20 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:15:25.576 08:41:20 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:15:25.576 08:41:20 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:25.576 08:41:20 accel -- common/autotest_common.sh@10 -- # set +x 00:15:25.576 ************************************ 00:15:25.576 START TEST accel_negative_buffers 00:15:25.576 ************************************ 00:15:25.576 08:41:20 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:15:25.576 08:41:20 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:15:25.576 08:41:20 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:15:25.576 08:41:20 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:15:25.576 08:41:20 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:25.576 08:41:20 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:15:25.576 08:41:20 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:25.576 08:41:20 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:15:25.576 08:41:20 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:15:25.576 08:41:20 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:15:25.576 08:41:20 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:25.576 08:41:20 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:25.576 08:41:20 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:25.576 08:41:20 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:25.576 08:41:20 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:25.576 08:41:20 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:15:25.576 08:41:20 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:15:25.576 -x option must be non-negative. 
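The usage listing that follows spells out why: -x is the number of xor source buffers, minimum 2, so -1 can never parse. A well-formed xor invocation would look like (illustrative, same path convention as above):

    ./build/examples/accel_perf -t 1 -w xor -y -x 2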
00:15:25.576 [2024-05-15 08:41:20.159552] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:15:25.576 accel_perf options: 00:15:25.576 [-h help message] 00:15:25.576 [-q queue depth per core] 00:15:25.576 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:15:25.576 [-T number of threads per core 00:15:25.576 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:15:25.576 [-t time in seconds] 00:15:25.576 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:15:25.576 [ dif_verify, , dif_generate, dif_generate_copy 00:15:25.576 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:15:25.576 [-l for compress/decompress workloads, name of uncompressed input file 00:15:25.576 [-S for crc32c workload, use this seed value (default 0) 00:15:25.576 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:15:25.576 [-f for fill workload, use this BYTE value (default 255) 00:15:25.576 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:15:25.576 [-y verify result if this switch is on] 00:15:25.576 [-a tasks to allocate per core (default: same value as -q)] 00:15:25.576 Can be used to spread operations across a wider range of memory. 00:15:25.576 08:41:20 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:15:25.576 08:41:20 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:25.576 08:41:20 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:25.576 08:41:20 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:25.576 00:15:25.576 real 0m0.021s 00:15:25.576 user 0m0.011s 00:15:25.576 sys 0m0.010s 00:15:25.576 08:41:20 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:25.576 08:41:20 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:15:25.576 ************************************ 00:15:25.576 END TEST accel_negative_buffers 00:15:25.576 ************************************ 00:15:25.576 Error: writing output failed: Broken pipe 00:15:25.576 08:41:20 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:15:25.576 08:41:20 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:15:25.576 08:41:20 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:25.576 08:41:20 accel -- common/autotest_common.sh@10 -- # set +x 00:15:25.576 ************************************ 00:15:25.576 START TEST accel_crc32c 00:15:25.576 ************************************ 00:15:25.576 08:41:20 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -S 32 -y 00:15:25.576 08:41:20 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:15:25.576 08:41:20 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:15:25.576 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:25.576 08:41:20 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:15:25.576 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:25.576 08:41:20 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:15:25.576 08:41:20 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:15:25.576 08:41:20 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:25.576 08:41:20 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:25.576 08:41:20 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:25.576 08:41:20 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:25.576 08:41:20 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:25.576 08:41:20 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:15:25.576 08:41:20 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:15:25.576 [2024-05-15 08:41:20.230540] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:25.576 [2024-05-15 08:41:20.230604] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2148434 ] 00:15:25.576 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.576 [2024-05-15 08:41:20.302885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.835 [2024-05-15 08:41:20.392833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:25.835 08:41:20 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:25.835 08:41:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:27.209 08:41:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:27.209 08:41:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:27.209 08:41:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:27.209 08:41:21 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:15:27.209 08:41:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:27.209 08:41:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:27.209 08:41:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:27.209 08:41:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:27.209 08:41:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:27.209 08:41:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:27.209 08:41:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:27.209 08:41:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:27.209 08:41:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:27.209 08:41:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:27.209 08:41:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:27.209 08:41:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:27.209 08:41:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:27.209 08:41:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:27.209 08:41:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:27.209 08:41:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:27.209 08:41:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:27.210 08:41:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:27.210 08:41:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:27.210 08:41:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:27.210 08:41:21 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:27.210 08:41:21 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:15:27.210 08:41:21 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:27.210 00:15:27.210 real 0m1.412s 00:15:27.210 user 0m1.261s 00:15:27.210 sys 0m0.154s 00:15:27.210 08:41:21 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:27.210 08:41:21 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:15:27.210 ************************************ 00:15:27.210 END TEST accel_crc32c 00:15:27.210 ************************************ 00:15:27.210 08:41:21 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:15:27.210 08:41:21 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:15:27.210 08:41:21 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:27.210 08:41:21 accel -- common/autotest_common.sh@10 -- # set +x 00:15:27.210 ************************************ 00:15:27.210 START TEST accel_crc32c_C2 00:15:27.210 ************************************ 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -y -C 2 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:15:27.210 [2024-05-15 08:41:21.689902] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:27.210 [2024-05-15 08:41:21.689963] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2148592 ] 00:15:27.210 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.210 [2024-05-15 08:41:21.761191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.210 [2024-05-15 08:41:21.851083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:27.210 08:41:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:28.581 08:41:23 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:28.581 00:15:28.581 real 0m1.413s 00:15:28.581 user 0m1.257s 00:15:28.581 sys 0m0.157s 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:28.581 08:41:23 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:15:28.581 ************************************ 00:15:28.581 END TEST accel_crc32c_C2 00:15:28.581 ************************************ 00:15:28.581 08:41:23 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:15:28.581 08:41:23 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:15:28.581 08:41:23 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:28.581 08:41:23 accel -- common/autotest_common.sh@10 -- # set +x 00:15:28.581 ************************************ 00:15:28.581 START TEST accel_copy 00:15:28.581 ************************************ 00:15:28.581 08:41:23 accel.accel_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy -y 00:15:28.581 08:41:23 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:15:28.581 08:41:23 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:15:28.581 08:41:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:28.581 08:41:23 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:15:28.581 08:41:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:28.581 08:41:23 
00:15:28.581 08:41:23 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:15:28.581 ************************************
00:15:28.581 START TEST accel_copy
00:15:28.581 ************************************
00:15:28.581 08:41:23 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:15:28.581 08:41:23 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:15:28.581 08:41:23 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:15:28.581 08:41:23 accel.accel_copy -- accel/accel.sh@41 -- # jq -r .
00:15:28.582 [2024-05-15 08:41:23.152533] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:15:28.582 [2024-05-15 08:41:23.152608] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2148745 ]
00:15:28.582 EAL: No free 2048 kB hugepages reported on node 1
00:15:28.582 [2024-05-15 08:41:23.224910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:28.582 [2024-05-15 08:41:23.313854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:28.839 08:41:23 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
00:15:28.839 08:41:23 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
00:15:28.839 08:41:23 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
00:15:28.839 08:41:23 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:15:28.839 08:41:23 accel.accel_copy -- accel/accel.sh@20 -- # val=software
00:15:28.840 08:41:23 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
00:15:28.840 08:41:23 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:15:28.840 08:41:23 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:15:28.840 08:41:23 accel.accel_copy -- accel/accel.sh@20 -- # val=1
00:15:28.840 08:41:23 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:15:28.840 08:41:23 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes
00:15:29.772 08:41:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:15:29.772 08:41:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:15:29.772 08:41:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:15:29.772 real 0m1.405s
00:15:29.772 user 0m1.250s
00:15:29.772 sys 0m0.156s
00:15:29.772 ************************************
00:15:29.772 END TEST accel_copy
00:15:29.772 ************************************
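The accel_copy test above reduces to a single accel_perf run against the software module. A hand-run sketch using only the binary path and flags that appear verbatim in the log (passing a JSON accel config on /dev/fd/62, as the harness does, is assumed optional here when the default software module is acceptable):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 1-second verified copy workload; the parsed output above shows 4096-byte buffers, queue depth 32
  "$SPDK/build/examples/accel_perf" -t 1 -w copy -y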
00:15:30.030 08:41:24 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:15:30.030 ************************************
00:15:30.030 START TEST accel_fill
00:15:30.030 ************************************
00:15:30.030 08:41:24 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:15:30.030 08:41:24 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:15:30.030 08:41:24 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:15:30.030 08:41:24 accel.accel_fill -- accel/accel.sh@41 -- # jq -r .
00:15:30.030 [2024-05-15 08:41:24.605996] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:15:30.030 [2024-05-15 08:41:24.606066] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2148924 ]
00:15:30.030 EAL: No free 2048 kB hugepages reported on node 1
00:15:30.030 [2024-05-15 08:41:24.676567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:30.030 [2024-05-15 08:41:24.765892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:30.288 08:41:24 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
00:15:30.288 08:41:24 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
00:15:30.288 08:41:24 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:15:30.288 08:41:24 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:15:30.288 08:41:24 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:15:30.288 08:41:24 accel.accel_fill -- accel/accel.sh@20 -- # val=software
00:15:30.288 08:41:24 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:15:30.289 08:41:24 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:15:30.289 08:41:24 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:15:30.289 08:41:24 accel.accel_fill -- accel/accel.sh@20 -- # val=1
00:15:30.289 08:41:24 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:15:30.289 08:41:24 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
00:15:31.222 08:41:25 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:15:31.222 08:41:25 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:15:31.222 08:41:25 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:15:31.222 real 0m1.405s
00:15:31.222 user 0m1.254s
00:15:31.222 sys 0m0.152s
00:15:31.222 ************************************
00:15:31.222 END TEST accel_fill
00:15:31.222 ************************************
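Note how the fill flags round-trip through the parsed output above: -f 128 comes back as val=0x80 (128 decimal), and -q 64 / -a 64 as the two val=64 records, since accel_perf echoes its configuration before running. Reading -q as queue depth and -a as allocate depth is an inference from the matching values, not something this log states. Hand-run sketch:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Fill 4096-byte buffers with byte pattern 128 (0x80) for 1 second, verifying the result
  "$SPDK/build/examples/accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y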
00:15:31.480 08:41:26 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:15:31.480 ************************************
00:15:31.480 START TEST accel_copy_crc32c
00:15:31.480 ************************************
00:15:31.480 08:41:26 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:15:31.480 08:41:26 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:15:31.480 08:41:26 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:15:31.480 08:41:26 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
00:15:31.480 [2024-05-15 08:41:26.066068] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:15:31.480 [2024-05-15 08:41:26.066133] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2149179 ]
00:15:31.480 EAL: No free 2048 kB hugepages reported on node 1
00:15:31.480 [2024-05-15 08:41:26.140364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:31.480 [2024-05-15 08:41:26.230223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:31.739 08:41:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:15:31.739 08:41:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:15:31.739 08:41:26 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:15:31.739 08:41:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:15:31.739 08:41:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:15:31.739 08:41:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:15:31.739 08:41:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:15:31.739 08:41:26 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:15:31.739 08:41:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:15:31.739 08:41:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:15:31.739 08:41:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:15:31.739 08:41:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:15:31.739 08:41:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:15:33.112 08:41:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:15:33.112 08:41:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:15:33.112 08:41:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:15:33.112 real 0m1.423s
00:15:33.112 user 0m1.272s
00:15:33.112 sys 0m0.153s
00:15:33.112 ************************************
00:15:33.112 END TEST accel_copy_crc32c
00:15:33.112 ************************************
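copy_crc32c fuses a buffer copy with a CRC-32C computation in one operation, which is why the result check above tests both [[ -n software ]] and [[ -n copy_crc32c ]]. The val=0 record is read here as the CRC seed and the two '4096 bytes' records as the source and destination sizes; that is an interpretation of accel_perf's printout, not spelled out in the log. Hand-run sketch:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Copy + CRC-32C in a single operation, verified, 1-second run
  "$SPDK/build/examples/accel_perf" -t 1 -w copy_crc32c -y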
00:15:33.112 08:41:27 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:15:33.112 ************************************
00:15:33.112 START TEST accel_copy_crc32c_C2
00:15:33.112 ************************************
00:15:33.112 08:41:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:15:33.112 08:41:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:15:33.112 08:41:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:15:33.112 08:41:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:15:33.112 [2024-05-15 08:41:27.536867] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:15:33.112 [2024-05-15 08:41:27.536930] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2149331 ]
00:15:33.112 EAL: No free 2048 kB hugepages reported on node 1
00:15:33.112 [2024-05-15 08:41:27.607951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:33.112 [2024-05-15 08:41:27.698066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:33.112 08:41:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:15:33.112 08:41:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:15:33.112 08:41:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:15:33.112 08:41:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:15:33.112 08:41:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:15:33.112 08:41:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:15:33.112 08:41:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:15:33.112 08:41:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:15:33.112 08:41:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:15:33.112 08:41:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:15:33.112 08:41:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:15:33.112 08:41:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:15:33.112 08:41:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:15:34.484 08:41:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:15:34.485 08:41:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:15:34.485 08:41:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:15:34.485 real 0m1.416s
00:15:34.485 user 0m1.258s
00:15:34.485 sys 0m0.159s
00:15:34.485 ************************************
00:15:34.485 END TEST accel_copy_crc32c_C2
00:15:34.485 ************************************
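The only change from the previous test is -C 2, and the parsed output shifts with it: alongside the '4096 bytes' record there is now an '8192 bytes' one, consistent with two 4096-byte source buffers being chained into each copy_crc32c operation (an inference from the values; the log does not label them). Hand-run sketch:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Same workload with -C 2: chain two source buffers per copy+CRC-32C operation
  "$SPDK/build/examples/accel_perf" -t 1 -w copy_crc32c -y -C 2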
00:15:34.485 08:41:28 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:15:34.485 ************************************
00:15:34.485 START TEST accel_dualcast
00:15:34.485 ************************************
00:15:34.485 08:41:28 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:15:34.485 08:41:28 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:15:34.485 08:41:28 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:15:34.485 08:41:28 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r .
00:15:34.485 [2024-05-15 08:41:29.000028] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:15:34.485 [2024-05-15 08:41:29.000090] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2149490 ]
00:15:34.485 EAL: No free 2048 kB hugepages reported on node 1
00:15:34.485 [2024-05-15 08:41:29.072193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:34.485 [2024-05-15 08:41:29.163595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:34.485 08:41:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:15:34.485 08:41:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
00:15:34.485 08:41:29 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:15:34.485 08:41:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:15:34.485 08:41:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:15:34.485 08:41:29 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:15:34.485 08:41:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:15:34.485 08:41:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:15:34.485 08:41:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:15:34.485 08:41:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:15:34.485 08:41:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
00:15:35.894 08:41:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:15:35.894 08:41:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:15:35.894 08:41:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:15:35.894 real 0m1.405s
00:15:35.894 user 0m1.252s
00:15:35.894 sys 0m0.153s
00:15:35.894 ************************************
00:15:35.894 END TEST accel_dualcast
00:15:35.894 ************************************
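In SPDK's accel framework, a dualcast operation writes one source buffer to two destinations at once; the log itself only carries the workload name, so that reading comes from the framework's documentation of the op, not from anything printed here. Hand-run sketch with the logged flags:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 1-second verified dualcast workload on 4096-byte buffers
  "$SPDK/build/examples/accel_perf" -t 1 -w dualcast -y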
00:15:35.894 08:41:30 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:15:35.894 ************************************
00:15:35.894 START TEST accel_compare
00:15:35.894 ************************************
00:15:35.894 08:41:30 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:15:35.895 08:41:30 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:15:35.895 08:41:30 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:15:35.895 08:41:30 accel.accel_compare -- accel/accel.sh@41 -- # jq -r .
00:15:35.895 [2024-05-15 08:41:30.454088] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:15:35.895 [2024-05-15 08:41:30.454154] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2149762 ]
00:15:35.895 EAL: No free 2048 kB hugepages reported on node 1
00:15:35.895 [2024-05-15 08:41:30.526434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:35.895 [2024-05-15 08:41:30.616842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:35.895 08:41:30 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:15:35.895 08:41:30 accel.accel_compare -- accel/accel.sh@20 -- # val=compare
00:15:35.895 08:41:30 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:15:35.895 08:41:30 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:15:35.895 08:41:30 accel.accel_compare -- accel/accel.sh@20 -- # val=software
00:15:35.895 08:41:30 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
00:15:37.274 08:41:31 accel.accel_compare
-- accel/accel.sh@20 -- # val= 00:15:37.274 08:41:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:37.274 08:41:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:37.274 08:41:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:37.274 08:41:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:15:37.274 08:41:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:37.274 08:41:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:37.274 08:41:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:37.274 08:41:31 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:37.274 08:41:31 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:15:37.274 08:41:31 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:37.274 00:15:37.274 real 0m1.403s 00:15:37.274 user 0m1.245s 00:15:37.274 sys 0m0.158s 00:15:37.274 08:41:31 accel.accel_compare -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:37.274 08:41:31 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:15:37.274 ************************************ 00:15:37.274 END TEST accel_compare 00:15:37.274 ************************************ 00:15:37.274 08:41:31 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:15:37.274 08:41:31 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:15:37.274 08:41:31 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:37.274 08:41:31 accel -- common/autotest_common.sh@10 -- # set +x 00:15:37.274 ************************************ 00:15:37.274 START TEST accel_xor 00:15:37.274 ************************************ 00:15:37.275 08:41:31 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y 00:15:37.275 08:41:31 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:15:37.275 08:41:31 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:15:37.275 08:41:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:37.275 08:41:31 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:15:37.275 08:41:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:37.275 08:41:31 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:15:37.275 08:41:31 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:15:37.275 08:41:31 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:37.275 08:41:31 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:37.275 08:41:31 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:37.275 08:41:31 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:37.275 08:41:31 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:37.275 08:41:31 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:15:37.275 08:41:31 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:15:37.275 [2024-05-15 08:41:31.904883] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
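The repeated IFS=: / read -r var val / case "$var" entries (accel.sh lines 19-23) are a single loop parsing accel_perf's self-description, capturing the opcode and module that the final [[ -n ... ]] checks assert on. A hedged reconstruction of that loop, inferred from the trace rather than copied from accel.sh — the *opc*/*module* glob keys are assumptions standing in for whatever labels accel_perf actually prints:

    while IFS=: read -r var val; do
      case "$var" in
        *opc*)    accel_opc=$val ;;     # sh@23 in the trace: e.g. compare, xor
        *module*) accel_module=$val ;;  # sh@22 in the trace: e.g. software
      esac
    done < <(./build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y)

In the harness, fd 62 carries the accel JSON config built by build_accel_config; run standalone, the -c argument would need a real file instead.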
00:15:37.275 [2024-05-15 08:41:31.904945] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2149924 ] 00:15:37.275 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.275 [2024-05-15 08:41:31.975196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.275 [2024-05-15 08:41:32.065763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:37.533 08:41:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:38.906 
08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:38.906 00:15:38.906 real 0m1.418s 00:15:38.906 user 0m1.260s 00:15:38.906 sys 0m0.159s 00:15:38.906 08:41:33 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:38.906 08:41:33 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:15:38.906 ************************************ 00:15:38.906 END TEST accel_xor 00:15:38.906 ************************************ 00:15:38.906 08:41:33 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:15:38.906 08:41:33 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:15:38.906 08:41:33 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:38.906 08:41:33 accel -- common/autotest_common.sh@10 -- # set +x 00:15:38.906 ************************************ 00:15:38.906 START TEST accel_xor 00:15:38.906 ************************************ 00:15:38.906 08:41:33 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y -x 3 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:15:38.906 [2024-05-15 08:41:33.381312] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
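This second accel_xor pass differs from the first only in -x 3, visible in both the run_test line and the accel_perf command: three source buffers are XORed per operation instead of the two used above (the val=3 entry below replaces the earlier val=2). An equivalent manual run, flags taken verbatim from the trace:

    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3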
00:15:38.906 [2024-05-15 08:41:33.381375] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2150078 ] 00:15:38.906 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.906 [2024-05-15 08:41:33.455762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.906 [2024-05-15 08:41:33.544372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:38.906 08:41:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:40.279 08:41:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:40.280 
08:41:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:15:40.280 08:41:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:40.280 00:15:40.280 real 0m1.410s 00:15:40.280 user 0m1.256s 00:15:40.280 sys 0m0.155s 00:15:40.280 08:41:34 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:40.280 08:41:34 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:15:40.280 ************************************ 00:15:40.280 END TEST accel_xor 00:15:40.280 ************************************ 00:15:40.280 08:41:34 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:15:40.280 08:41:34 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:15:40.280 08:41:34 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:40.280 08:41:34 accel -- common/autotest_common.sh@10 -- # set +x 00:15:40.280 ************************************ 00:15:40.280 START TEST accel_dif_verify 00:15:40.280 ************************************ 00:15:40.280 08:41:34 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_verify 00:15:40.280 08:41:34 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:15:40.280 08:41:34 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:15:40.280 08:41:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:34 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:15:40.280 08:41:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:34 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:15:40.280 08:41:34 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:15:40.280 08:41:34 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:40.280 08:41:34 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:40.280 08:41:34 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:40.280 08:41:34 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:40.280 08:41:34 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:40.280 08:41:34 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:15:40.280 08:41:34 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:15:40.280 [2024-05-15 08:41:34.838259] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
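Every test in this block ends with the same three accel.sh line-27 checks, seen above for xor and below for dif_verify: the module and opcode captured by the parsing loop must be non-empty, and the module must have resolved to the software path. Combined into one condition (variable names as they appear in the trace; the values are already expanded by the time xtrace prints them):

    # the three sh@27 assertions in one test
    [[ -n $accel_module && -n $accel_opc && $accel_module == software ]]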
00:15:40.280 [2024-05-15 08:41:34.838334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2150318 ] 00:15:40.280 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.280 [2024-05-15 08:41:34.910337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.280 [2024-05-15 08:41:35.000734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 
08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:40.280 08:41:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:40.538 08:41:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:40.538 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:40.538 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:40.538 08:41:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:40.538 08:41:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:40.538 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:40.538 08:41:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:41.472 
08:41:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:15:41.472 08:41:36 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:41.472 00:15:41.472 real 0m1.421s 00:15:41.472 user 0m1.266s 00:15:41.472 sys 0m0.158s 00:15:41.472 08:41:36 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:41.472 08:41:36 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:15:41.472 ************************************ 00:15:41.472 END TEST accel_dif_verify 00:15:41.472 ************************************ 00:15:41.730 08:41:36 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:15:41.730 08:41:36 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:15:41.730 08:41:36 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:41.730 08:41:36 accel -- common/autotest_common.sh@10 -- # set +x 00:15:41.730 ************************************ 00:15:41.730 START TEST accel_dif_generate 00:15:41.730 ************************************ 00:15:41.730 08:41:36 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate 00:15:41.730 08:41:36 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:15:41.730 08:41:36 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:15:41.730 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:41.730 08:41:36 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
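The dif_verify argument dump above pairs a 4096-byte data buffer with 512-byte blocks and an 8-byte DIF field — the standard T10 DIF layout of a guard CRC, application tag and reference tag per logical block. Quick arithmetic on the traced sizes:

    echo $(( 4096 / 512 ))        # 8 protected blocks per buffer
    echo $(( (4096 / 512) * 8 ))  # 64 bytes of protection info per 4 KiB payload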
00:15:41.730 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:41.730 08:41:36 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:15:41.730 08:41:36 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:15:41.730 08:41:36 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:41.730 08:41:36 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:41.730 08:41:36 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:41.730 08:41:36 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:41.730 08:41:36 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:41.730 08:41:36 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:15:41.730 08:41:36 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:15:41.730 [2024-05-15 08:41:36.311865] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:41.730 [2024-05-15 08:41:36.311931] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2150512 ] 00:15:41.730 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.730 [2024-05-15 08:41:36.378582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.730 [2024-05-15 08:41:36.467591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.988 08:41:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:41.988 08:41:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:41.988 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:41.988 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:41.988 08:41:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:41.988 08:41:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:41.988 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:41.988 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:41.988 08:41:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:15:41.988 08:41:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:41.988 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:41.988 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:41.988 08:41:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:41.988 08:41:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:41.988 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:41.988 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:41.988 08:41:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:41.988 08:41:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:41.988 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:41.988 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:41.988 08:41:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:41.989 08:41:36 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:41.989 08:41:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:15:42.922 08:41:37 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:42.922 00:15:42.922 real 0m1.400s 00:15:42.922 user 0m1.256s 00:15:42.922 sys 
0m0.148s 00:15:42.922 08:41:37 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:42.922 08:41:37 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:15:42.922 ************************************ 00:15:42.922 END TEST accel_dif_generate 00:15:42.922 ************************************ 00:15:43.180 08:41:37 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:15:43.180 08:41:37 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:15:43.180 08:41:37 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:43.180 08:41:37 accel -- common/autotest_common.sh@10 -- # set +x 00:15:43.180 ************************************ 00:15:43.180 START TEST accel_dif_generate_copy 00:15:43.180 ************************************ 00:15:43.180 08:41:37 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate_copy 00:15:43.180 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:15:43.180 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:15:43.180 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:43.180 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:15:43.180 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:43.180 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:15:43.180 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:15:43.180 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:43.181 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:43.181 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:43.181 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:43.181 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:43.181 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:15:43.181 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:15:43.181 [2024-05-15 08:41:37.760143] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
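The real/user/sys triplet printed at each END TEST banner times the whole run_test invocation, so with -t 1 the ~1.4 s wall clock is the one-second measurement window plus SPDK app startup and teardown; user+sys landing near the same 1.4 s is consistent with the single polled reactor core (-c 0x1) spinning for the entire run. A manual equivalent outside the harness, reading run_test as a timed wrapper (an inference from these lines, not from autotest_common.sh itself):

    time ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy -y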
00:15:43.181 [2024-05-15 08:41:37.760209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2150670 ] 00:15:43.181 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.181 [2024-05-15 08:41:37.831974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.181 [2024-05-15 08:41:37.922861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.438 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:43.438 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:43.438 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:43.439 08:41:37 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:43.439 08:41:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:44.373 00:15:44.373 real 0m1.422s 00:15:44.373 user 0m1.267s 00:15:44.373 sys 0m0.157s 00:15:44.373 08:41:39 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:44.631 08:41:39 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:15:44.631 ************************************ 00:15:44.631 END TEST accel_dif_generate_copy 00:15:44.631 ************************************ 00:15:44.631 08:41:39 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:15:44.631 08:41:39 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:15:44.631 08:41:39 accel -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:15:44.631 08:41:39 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:44.631 08:41:39 accel -- common/autotest_common.sh@10 -- # set +x 00:15:44.631 ************************************ 00:15:44.631 START TEST accel_comp 00:15:44.631 ************************************ 00:15:44.631 08:41:39 accel.accel_comp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:15:44.631 08:41:39 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:15:44.631 08:41:39 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:15:44.631 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:44.631 08:41:39 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:15:44.631 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:44.631 08:41:39 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:15:44.631 08:41:39 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:15:44.631 08:41:39 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:44.631 08:41:39 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:44.631 08:41:39 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:44.631 08:41:39 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:44.631 08:41:39 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:44.631 08:41:39 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:15:44.631 08:41:39 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:15:44.631 [2024-05-15 08:41:39.228044] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:44.631 [2024-05-15 08:41:39.228104] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2150823 ] 00:15:44.631 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.631 [2024-05-15 08:41:39.298730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.631 [2024-05-15 08:41:39.389333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:44.890 
08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:15:44.890 08:41:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:44.891 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:44.891 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:44.891 08:41:39 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:15:44.891 08:41:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:44.891 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:44.891 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:44.891 08:41:39 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:15:44.891 08:41:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:44.891 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:44.891 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:44.891 08:41:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:44.891 08:41:39 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:15:44.891 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:44.891 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:44.891 08:41:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:44.891 08:41:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:44.891 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:44.891 08:41:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:45.825 08:41:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:45.825 08:41:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:45.825 08:41:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:45.825 08:41:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:45.825 08:41:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:45.825 08:41:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:45.825 08:41:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:45.825 08:41:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:45.825 08:41:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:45.825 08:41:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:45.825 08:41:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:45.825 08:41:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:45.825 08:41:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:45.825 08:41:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:45.825 08:41:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:45.825 08:41:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:45.825 08:41:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:45.825 08:41:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:45.825 08:41:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:45.825 08:41:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:45.825 08:41:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:46.083 08:41:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:46.083 08:41:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:46.083 08:41:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:46.083 08:41:40 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:46.083 08:41:40 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:15:46.083 08:41:40 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:46.083 00:15:46.083 real 0m1.405s 00:15:46.083 user 0m1.252s 00:15:46.083 sys 0m0.155s 00:15:46.083 08:41:40 accel.accel_comp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:46.083 08:41:40 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:15:46.083 ************************************ 00:15:46.083 END TEST accel_comp 00:15:46.083 ************************************ 00:15:46.083 08:41:40 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:15:46.083 08:41:40 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:15:46.083 08:41:40 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:46.083 08:41:40 accel -- common/autotest_common.sh@10 -- # set +x 00:15:46.083 ************************************ 00:15:46.083 START TEST accel_decomp 00:15:46.083 ************************************ 00:15:46.083 08:41:40 
accel.accel_decomp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:15:46.083 08:41:40 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:15:46.083 08:41:40 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:15:46.083 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:46.083 08:41:40 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:15:46.083 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:46.083 08:41:40 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:15:46.083 08:41:40 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:15:46.083 08:41:40 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:46.083 08:41:40 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:46.083 08:41:40 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:46.083 08:41:40 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:46.083 08:41:40 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:46.083 08:41:40 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:15:46.083 08:41:40 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:15:46.083 [2024-05-15 08:41:40.690848] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:46.083 [2024-05-15 08:41:40.690916] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2151094 ] 00:15:46.083 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.083 [2024-05-15 08:41:40.763847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.083 [2024-05-15 08:41:40.853232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:46.341 08:41:40 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:46.341 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:46.342 08:41:40 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:46.342 08:41:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:15:46.342 08:41:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:46.342 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:46.342 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:46.342 08:41:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:46.342 08:41:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:46.342 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:46.342 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:46.342 08:41:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:46.342 08:41:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:46.342 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:46.342 08:41:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:15:47.713 08:41:42 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:47.713 00:15:47.713 real 0m1.416s 00:15:47.713 user 0m1.269s 00:15:47.713 sys 0m0.149s 00:15:47.713 08:41:42 accel.accel_decomp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:47.713 08:41:42 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:15:47.713 ************************************ 00:15:47.713 END TEST accel_decomp 00:15:47.713 ************************************ 00:15:47.713 
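The trace above is the accel.sh harness tracing itself: each test emits its workload parameters as val= tokens, and the repeated IFS=: / read -r var val / case "$var" lines are the loop feeding them into accel_perf before the one-second software run. A minimal way to reproduce the decompress case by hand, using the binary and data paths exactly as they appear in the log; the only assumption is that dropping the harness's -c /dev/fd/62 JSON-config redirection (which injects the accel module config via a file descriptor) still yields the default software-module run:

    # software decompress workload for 1 second against the test bib file,
    # with -y passed through as the harness does above
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
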
08:41:42 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:15:47.713 08:41:42 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:15:47.713 08:41:42 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:47.713 08:41:42 accel -- common/autotest_common.sh@10 -- # set +x 00:15:47.713 ************************************ 00:15:47.713 START TEST accel_decmop_full 00:15:47.713 ************************************ 00:15:47.713 08:41:42 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:15:47.713 [2024-05-15 08:41:42.160117] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:15:47.713 [2024-05-15 08:41:42.160182] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2151257 ] 00:15:47.713 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.713 [2024-05-15 08:41:42.234765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.713 [2024-05-15 08:41:42.324465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:15:47.713 08:41:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:47.714 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:47.714 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:47.714 08:41:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:15:47.714 08:41:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:47.714 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:47.714 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:47.714 08:41:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:15:47.714 08:41:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:47.714 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:47.714 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:47.714 08:41:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:15:47.714 08:41:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:47.714 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:47.714 08:41:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:15:49.085 08:41:43 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:49.085 00:15:49.085 real 0m1.437s 00:15:49.085 user 0m1.283s 00:15:49.085 sys 0m0.156s 00:15:49.085 08:41:43 accel.accel_decmop_full -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:49.086 08:41:43 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:15:49.086 ************************************ 00:15:49.086 END TEST accel_decmop_full 00:15:49.086 ************************************ 00:15:49.086 08:41:43 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:15:49.086 08:41:43 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:15:49.086 08:41:43 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:49.086 08:41:43 accel -- common/autotest_common.sh@10 -- # set +x 00:15:49.086 ************************************ 00:15:49.086 START TEST accel_decomp_mcore 00:15:49.086 ************************************ 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:15:49.086 [2024-05-15 08:41:43.642403] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:49.086 [2024-05-15 08:41:43.642457] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2151410 ] 00:15:49.086 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.086 [2024-05-15 08:41:43.713531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:49.086 [2024-05-15 08:41:43.806939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.086 [2024-05-15 08:41:43.806990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:49.086 [2024-05-15 08:41:43.807108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:49.086 [2024-05-15 08:41:43.807111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:49.086 08:41:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:50.459 00:15:50.459 real 0m1.414s 00:15:50.459 user 0m4.710s 00:15:50.459 sys 0m0.151s 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:50.459 08:41:45 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:15:50.459 ************************************ 00:15:50.459 END TEST accel_decomp_mcore 00:15:50.459 ************************************ 00:15:50.460 08:41:45 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:15:50.460 08:41:45 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:15:50.460 08:41:45 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:50.460 08:41:45 accel -- common/autotest_common.sh@10 -- # set +x 00:15:50.460 ************************************ 00:15:50.460 START TEST accel_decomp_full_mcore 00:15:50.460 ************************************ 00:15:50.460 08:41:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:15:50.460 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:15:50.460 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:15:50.460 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:50.460 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:15:50.460 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:50.460 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:15:50.460 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:15:50.460 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:50.460 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:50.460 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:50.460 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]]
00:15:50.460 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]]
00:15:50.460 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=,
00:15:50.460 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r .
00:15:50.460 [2024-05-15 08:41:45.109144] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:15:50.460 [2024-05-15 08:41:45.109209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2151689 ]
00:15:50.460 EAL: No free 2048 kB hugepages reported on node 1
00:15:50.460 [2024-05-15 08:41:45.180045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:15:50.719 [2024-05-15 08:41:45.276697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:15:50.719 [2024-05-15 08:41:45.276761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:15:50.719 [2024-05-15 08:41:45.276851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:15:50.719 [2024-05-15 08:41:45.276854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:50.720 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf
00:15:50.720 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress
00:15:50.720 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:15:50.720 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes'
00:15:50.720 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software
00:15:50.720 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software
00:15:50.720 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:15:50.720 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:15:50.720 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:15:50.720 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1
00:15:50.720 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:15:50.720 08:41:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes
00:15:52.094 08:41:46 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:15:52.094 08:41:46 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:15:52.094 08:41:46 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:15:52.094 real 0m1.445s
00:15:52.094 user 0m4.790s
00:15:52.094 sys 0m0.160s
00:15:52.094 08:41:46 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable
00:15:52.094 08:41:46 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:15:52.094 ************************************
00:15:52.094 END TEST accel_decomp_full_mcore
00:15:52.094 ************************************
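The long runs of val=/case/read records in the trace above all come from accel.sh's option parser: build_accel_config streams key:value pairs into a field-split read loop, and bash xtrace echoes every iteration, which is why each configured value surfaces as a bare val=... line. A minimal sketch of that idiom follows; the variable names are illustrative, not the exact accel.sh source.

    #!/usr/bin/env bash
    # Sketch of the parser behind the "val=..." trace lines above.
    # Keys and values arrive as "key:value"; IFS=: splits them, and
    # xtrace (set -x) echoes each assignment, producing the log noise.
    while IFS=: read -r var val; do
        case "$var" in
            mask) core_mask=$val ;;        # e.g. val=0xf
            workload) accel_opc=$val ;;    # e.g. val=decompress
            module) accel_module=$val ;;   # e.g. val=software
            *) : ;;                        # unknown keys are skipped
        esac
    done <<'EOF'
    mask:0xf
    workload:decompress
    module:software
    EOF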
00:15:52.094 08:41:46 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:15:52.094 ************************************
00:15:52.094 START TEST accel_decomp_mthread
00:15:52.094 ************************************
00:15:52.094 08:41:46 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:15:52.094 08:41:46 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
00:15:52.094 [2024-05-15 08:41:46.613462] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:15:52.094 [2024-05-15 08:41:46.613549] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2151848 ]
00:15:52.094 EAL: No free 2048 kB hugepages reported on node 1
00:15:52.094 [2024-05-15 08:41:46.684394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:52.094 [2024-05-15 08:41:46.769515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:52.094 08:41:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1
00:15:52.094 08:41:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress
00:15:52.094 08:41:46 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:15:52.094 08:41:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes'
00:15:52.094 08:41:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software
00:15:52.094 08:41:46 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software
00:15:52.094 08:41:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:15:52.094 08:41:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:15:52.094 08:41:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:15:52.094 08:41:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2
00:15:52.094 08:41:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:15:52.094 08:41:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes
00:15:53.466 08:41:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:15:53.466 08:41:48 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:15:53.466 08:41:48 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:15:53.466 real 0m1.407s
00:15:53.466 user 0m1.253s
00:15:53.466 sys 0m0.158s
00:15:53.466 ************************************
00:15:53.466 END TEST accel_decomp_mthread
00:15:53.466 ************************************
00:15:53.466 08:41:48 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:15:53.466 ************************************
00:15:53.466 START TEST accel_decomp_full_mthread
00:15:53.466 ************************************
00:15:53.467 08:41:48 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:15:53.467 08:41:48 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config
00:15:53.467 [2024-05-15 08:41:48.070638] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:15:53.467 [2024-05-15 08:41:48.070700] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2152014 ]
00:15:53.467 EAL: No free 2048 kB hugepages reported on node 1
00:15:53.467 [2024-05-15 08:41:48.144427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:53.467 [2024-05-15 08:41:48.234821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:53.725 08:41:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1
00:15:53.725 08:41:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress
00:15:53.725 08:41:48 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:15:53.725 08:41:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes'
00:15:53.725 08:41:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software
00:15:53.725 08:41:48 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software
00:15:53.725 08:41:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:15:53.725 08:41:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:15:53.725 08:41:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:15:53.725 08:41:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2
00:15:53.725 08:41:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:15:53.725 08:41:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes
00:15:55.096 08:41:49 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:15:55.096 08:41:49 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:15:55.096 08:41:49 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:15:55.096 real 0m1.456s
00:15:55.096 user 0m1.298s
00:15:55.096 sys 0m0.162s
00:15:55.096 ************************************
00:15:55.096 END TEST accel_decomp_full_mthread
00:15:55.096 ************************************
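Each of the three decompress runs above reduces to a single accel_perf invocation recorded verbatim in the trace. To repeat the accel_decomp_full_mthread case by hand against this workspace's build, something like the following should work; the -c /dev/fd/62 JSON config supplied by the harness is omitted, -o 0 appears to select the input file's full 111250-byte size (the runs without it show '4096 bytes'), and -T 2 requests two threads.

    # Hand-run of the accel_decomp_full_mthread workload (sketch).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" \
        -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" \
        -y -o 0 -T 2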
00:15:55.096 08:41:49 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:15:55.096 08:41:49 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:15:55.096 08:41:49 accel -- accel/accel.sh@137 -- # build_accel_config
00:15:55.096 ************************************
00:15:55.096 START TEST accel_dif_functional_tests
00:15:55.096 ************************************
00:15:55.096 [2024-05-15 08:41:49.598646] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:15:55.096 [2024-05-15 08:41:49.598719] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2152201 ]
00:15:55.096 EAL: No free 2048 kB hugepages reported on node 1
00:15:55.096 [2024-05-15 08:41:49.668585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:15:55.097 [2024-05-15 08:41:49.759734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:15:55.097 [2024-05-15 08:41:49.759788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:15:55.097 [2024-05-15 08:41:49.759791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:55.097 CUnit - A unit testing framework for C - Version 2.1-3
00:15:55.097 http://cunit.sourceforge.net/
00:15:55.097 Suite: accel_dif
00:15:55.097   Test: verify: DIF generated, GUARD check ...passed
00:15:55.097   Test: verify: DIF generated, APPTAG check ...passed
00:15:55.097   Test: verify: DIF generated, REFTAG check ...passed
00:15:55.097   Test: verify: DIF not generated, GUARD check ...
00:15:55.097     [2024-05-15 08:41:49.848665] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:15:55.097     [2024-05-15 08:41:49.848727] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:15:55.097   passed
00:15:55.097   Test: verify: DIF not generated, APPTAG check ...
00:15:55.097     [2024-05-15 08:41:49.848762] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:15:55.097     [2024-05-15 08:41:49.848801] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:15:55.097   passed
00:15:55.097   Test: verify: DIF not generated, REFTAG check ...
00:15:55.097     [2024-05-15 08:41:49.848831] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:15:55.097     [2024-05-15 08:41:49.848857] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:15:55.097   passed
00:15:55.097   Test: verify: APPTAG correct, APPTAG check ...passed
00:15:55.097   Test: verify: APPTAG incorrect, APPTAG check ...
00:15:55.097     [2024-05-15 08:41:49.848931] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:15:55.097   passed
00:15:55.097   Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:15:55.097   Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:15:55.097   Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:15:55.097   Test: verify: REFTAG_INIT incorrect, REFTAG check ...
00:15:55.097     [2024-05-15 08:41:49.849066] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:15:55.097   passed
00:15:55.097   Test: generate copy: DIF generated, GUARD check ...passed
00:15:55.097   Test: generate copy: DIF generated, APTTAG check ...passed
00:15:55.097   Test: generate copy: DIF generated, REFTAG check ...passed
00:15:55.097   Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:15:55.097   Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:15:55.097   Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:15:55.097   Test: generate copy: iovecs-len validate ...
00:15:55.097     [2024-05-15 08:41:49.849300] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:15:55.097   passed
00:15:55.097   Test: generate copy: buffer alignment validate ...passed
00:15:55.097 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:15:55.097               suites      1      1    n/a      0        0
00:15:55.097                tests     20     20     20      0        0
00:15:55.097              asserts    204    204    204      0      n/a
00:15:55.097 Elapsed time = 0.002 seconds
00:15:55.355 real 0m0.506s
00:15:55.355 user 0m0.767s
00:15:55.355 sys 0m0.193s
00:15:55.355 ************************************
00:15:55.355 END TEST accel_dif_functional_tests
00:15:55.355 ************************************
00:15:55.355 real 0m32.018s
00:15:55.355 user 0m35.082s
00:15:55.355 sys 0m4.868s
00:15:55.355 ************************************
00:15:55.355 END TEST accel
00:15:55.355 ************************************
00:15:55.355 08:41:50 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:15:55.355 ************************************
00:15:55.355 START TEST accel_rpc
00:15:55.355 ************************************
00:15:55.613 * Looking for test storage...
00:15:55.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:15:55.613 08:41:50 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:15:55.613 08:41:50 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:15:55.613 08:41:50 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2152359
00:15:55.613 08:41:50 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2152359
00:15:55.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:55.613 [2024-05-15 08:41:50.235702] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:15:55.613 [2024-05-15 08:41:50.235803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2152359 ]
00:15:55.613 EAL: No free 2048 kB hugepages reported on node 1
00:15:55.613 [2024-05-15 08:41:50.307295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:55.873 [2024-05-15 08:41:50.391289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:55.873 08:41:50 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:15:55.873 ************************************
00:15:55.873 START TEST accel_assign_opcode
00:15:55.873 ************************************
00:15:55.873 08:41:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:15:55.873 [2024-05-15 08:41:50.483992] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:15:55.873 08:41:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:15:55.873 [2024-05-15 08:41:50.492003] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:15:55.873 08:41:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:15:56.131 08:41:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:15:56.131 08:41:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software
00:15:56.131 08:41:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy
00:15:56.131 software
00:15:56.131 real 0m0.299s
00:15:56.131 user 0m0.041s
00:15:56.131 sys 0m0.006s
00:15:56.131 ************************************
00:15:56.131 END TEST accel_assign_opcode
00:15:56.131 ************************************
00:15:56.131 08:41:50 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2152359
00:15:56.131 08:41:50 accel_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2152359'
00:15:56.131 killing process with pid 2152359
00:15:56.131 08:41:50 accel_rpc -- common/autotest_common.sh@966 -- # kill 2152359
00:15:56.698 08:41:51 accel_rpc -- common/autotest_common.sh@971 -- # wait 2152359
00:15:56.698 real 0m1.097s
00:15:56.698 user 0m1.042s
00:15:56.698 sys 0m0.423s
00:15:56.698 ************************************
00:15:56.698 END TEST accel_rpc
00:15:56.698 ************************************
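The accel_assign_opcode suite drives spdk_tgt purely over JSON-RPC; the same sequence can be replayed by hand with rpc.py, using exactly the commands that appear in the trace. spdk_tgt is started with --wait-for-rpc so the opcode assignment lands before subsystem initialization; a rough sketch:

    # Replaying the accel_assign_opcode flow manually (sketch).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &   # harness waits on /var/tmp/spdk.sock via waitforlisten
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software
    "$SPDK/scripts/rpc.py" framework_start_init   # complete the deferred init
    "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy   # prints: software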
00:15:56.698 08:41:51 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:15:56.698 ************************************
00:15:56.698 START TEST app_cmdline
00:15:56.698 ************************************
00:15:56.698 * Looking for test storage...
00:15:56.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:15:56.698 08:41:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:15:56.698 08:41:51 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:15:56.698 08:41:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2152563
00:15:56.698 08:41:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2152563
00:15:56.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:56.698 [2024-05-15 08:41:51.382475] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:15:56.698 [2024-05-15 08:41:51.382566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2152563 ]
00:15:56.698 EAL: No free 2048 kB hugepages reported on node 1
00:15:56.957 [2024-05-15 08:41:51.450172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:56.957 [2024-05-15 08:41:51.538675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:57.213 08:41:51 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:15:57.471 {
00:15:57.471   "version": "SPDK v24.05-pre git sha1 4506c0c36",
00:15:57.471   "fields": {
00:15:57.471     "major": 24,
00:15:57.471     "minor": 5,
00:15:57.471     "patch": 0,
00:15:57.471     "suffix": "-pre",
00:15:57.471     "commit": "4506c0c36"
00:15:57.471   }
00:15:57.471 }
00:15:57.471 08:41:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:15:57.471 08:41:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:15:57.471 08:41:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:15:57.471 08:41:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:15:57.471 08:41:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:15:57.471 08:41:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:15:57.471 08:41:52 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:15:57.471 08:41:52 app_cmdline -- common/autotest_common.sh@649 -- # local es=0
00:15:57.471 08:41:52 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:15:57.471 08:41:52 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:15:57.471 08:41:52 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:15:57.729 request:
00:15:57.729 {
00:15:57.729   "method": "env_dpdk_get_mem_stats",
00:15:57.729   "req_id": 1
00:15:57.729 }
00:15:57.729 Got JSON-RPC error response
00:15:57.729 response:
00:15:57.729 {
00:15:57.729   "code": -32601,
00:15:57.729   "message": "Method not found"
00:15:57.729 }
00:15:57.729 08:41:52 app_cmdline -- common/autotest_common.sh@652 -- # es=1
00:15:57.729 08:41:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2152563
00:15:57.729 08:41:52 app_cmdline -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2152563'
00:15:57.729 killing process with pid 2152563
00:15:57.729 08:41:52 app_cmdline -- common/autotest_common.sh@966 -- # kill 2152563
00:15:57.986 08:41:52 app_cmdline -- common/autotest_common.sh@971 -- # wait 2152563
00:15:57.986 real 0m1.483s
00:15:57.986 user 0m1.804s
00:15:57.986 sys 0m0.462s
00:15:57.986 ************************************
00:15:57.986 END TEST app_cmdline
00:15:57.986 ************************************
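The "Method not found" exchange above is the point of the test: spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods succeed and everything else is rejected with JSON-RPC error -32601. Condensed to the commands seen in the trace:

    # Demonstrating the RPC allowlist checked by cmdline.sh (sketch).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    "$SPDK/scripts/rpc.py" spdk_get_version         # allowed: returns the version JSON above
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats   # rejected: "Method not found" (-32601)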
00:15:57.986 08:41:52 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:15:58.244 ************************************
00:15:58.244 START TEST version
00:15:58.244 ************************************
00:15:58.244 * Looking for test storage...
00:15:58.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:15:58.244 08:41:52 version -- app/version.sh@17 -- # get_header_version major
00:15:58.244 08:41:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:15:58.244 08:41:52 version -- app/version.sh@14 -- # cut -f2
00:15:58.244 08:41:52 version -- app/version.sh@14 -- # tr -d '"'
00:15:58.244 08:41:52 version -- app/version.sh@17 -- # major=24
00:15:58.244 08:41:52 version -- app/version.sh@18 -- # get_header_version minor
00:15:58.244 08:41:52 version -- app/version.sh@18 -- # minor=5
00:15:58.244 08:41:52 version -- app/version.sh@19 -- # get_header_version patch
00:15:58.244 08:41:52 version -- app/version.sh@19 -- # patch=0
00:15:58.244 08:41:52 version -- app/version.sh@20 -- # get_header_version suffix
00:15:58.244 08:41:52 version -- app/version.sh@20 -- # suffix=-pre
00:15:58.244 08:41:52 version -- app/version.sh@22 -- # version=24.5
00:15:58.244 08:41:52 version -- app/version.sh@25 -- # (( patch != 0 ))
00:15:58.244 08:41:52 version -- app/version.sh@28 -- # version=24.5rc0
00:15:58.244 08:41:52 version -- app/version.sh@30 -- # PYTHONPATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins (entries repeated)
00:15:58.244 08:41:52 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:15:58.244 08:41:52 version -- app/version.sh@30 -- # py_version=24.5rc0
00:15:58.244 08:41:52 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]]
00:15:58.244 real 0m0.112s
00:15:58.244 user 0m0.053s
00:15:58.244 sys 0m0.081s
00:15:58.244 ************************************
00:15:58.244 END TEST version
00:15:58.244 ************************************
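version.sh derives the version entirely from include/spdk/version.h with the grep/cut/tr pipeline visible in the trace, then cross-checks it against the Python package. A condensed sketch of the helper (version.sh's actual definition may differ in detail):

    # Sketch of get_header_version and the 24.5rc0 composition above.
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
            /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h |
            cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)     # 24
    minor=$(get_header_version MINOR)     # 5
    patch=$(get_header_version PATCH)     # 0
    suffix=$(get_header_version SUFFIX)   # -pre
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    [[ $suffix == -pre ]] && version=${version}rc0
    echo "$version"                       # 24.5rc0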
00:15:58.244 08:41:52 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']'
00:15:58.244 08:41:52 -- spdk/autotest.sh@194 -- # uname -s
00:15:58.244 08:41:52 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:15:58.244 08:41:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:15:58.244 08:41:52 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:15:58.244 08:41:52 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']'
00:15:58.244 08:41:52 -- spdk/autotest.sh@256 -- # timing_exit lib
00:15:58.244 08:41:52 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']'
00:15:58.244 08:41:52 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']'
00:15:58.244 08:41:52 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']'
00:15:58.244 08:41:52 -- spdk/autotest.sh@276 -- # export NET_TYPE
00:15:58.244 08:41:52 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']'
00:15:58.244 08:41:52 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']'
00:15:58.244 08:41:52 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp
00:15:58.244 ************************************
00:15:58.244 START TEST nvmf_tcp
00:15:58.244 ************************************
00:15:58.503 * Looking for test storage...
00:15:58.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']'
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:15:58.503 08:41:53 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:58.503 08:41:53 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:58.503 08:41:53 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:58.503 08:41:53 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin (toolchain prefixes repeated)
00:15:58.503 08:41:53 nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@47 -- # : 0
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@")
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target
00:15:58.503 08:41:53 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]]
08:41:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:58.503 ************************************ 00:15:58.503 START TEST nvmf_example 00:15:58.503 ************************************ 00:15:58.503 08:41:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:58.503 * Looking for test storage... 00:15:58.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:58.503 08:41:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@721 -- # xtrace_disable 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:15:58.504 08:41:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:01.034 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:01.034 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:01.034 Found net devices under 
0000:09:00.0: cvl_0_0 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:01.034 Found net devices under 0000:09:00.1: cvl_0_1 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:01.034 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:16:01.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:01.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms
00:16:01.293
00:16:01.293 --- 10.0.0.2 ping statistics ---
00:16:01.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:01.293 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:01.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:01.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms
00:16:01.293
00:16:01.293 --- 10.0.0.1 ping statistics ---
00:16:01.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:01.293 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@721 -- # xtrace_disable
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2154880
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2154880
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@828 -- # '[' -z 2154880 ']'
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local max_retries=100
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:01.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
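The nvmf_tcp_init sequence traced above is worth reading as one unit: it splits the dual-port E810 across two network stacks so initiator and target traffic actually cross the wire instead of looping back. A minimal sketch of the same topology and target launch, mirroring the commands in the trace (interface names are the CVL ports found above and would differ on other NICs; run as root from an SPDK checkout):

TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                 # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"          # initiator side (root namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"   # target side
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Verify reachability in both directions, as the trace does:
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Launch the example target inside the namespace (cores 0-3, shm id 0):
modprobe nvme-tcp
ip netns exec "$NS" ./build/examples/nvmf -i 0 -g 10000 -m 0xF &
nvmfpid=$!

From this point every NVMF_APP/NVMF_EXAMPLE invocation is prefixed with ip netns exec, which is exactly what the NVMF_TARGET_NS_CMD expansion at nvmf/common.sh@270 arranges.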
00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:01.293 08:41:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:01.293 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@861 -- # return 0 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:02.228 08:41:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:02.229 08:41:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:02.229 08:41:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:02.229 08:41:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:02.229 08:41:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:02.229 08:41:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:16:02.229 08:41:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:02.229 EAL: No free 2048 kB hugepages reported on node 1 
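The five rpc_cmd calls above are the entire target configuration. In this harness rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so a standalone equivalent (flag values copied verbatim from the trace) would look like:

rpc=scripts/rpc.py    # pass -s /var/tmp/spdk.sock explicitly if the socket path differs

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                 # 64 MB RAM-backed bdev, 512 B blocks -> Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Then drive it from the initiator side; this is the run whose results
# follow below (queue depth 64, 4 KiB I/Os, random mix with 30% reads, 10 s):
build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The bdev name Malloc0 is not chosen here; it is the auto-assigned name the trace captures into malloc_bdevs at nvmf_example.sh@47.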
00:16:14.463 Initializing NVMe Controllers 00:16:14.463 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:14.463 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:14.463 Initialization complete. Launching workers. 00:16:14.463 ======================================================== 00:16:14.463 Latency(us) 00:16:14.463 Device Information : IOPS MiB/s Average min max 00:16:14.463 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14827.19 57.92 4317.75 887.77 15253.63 00:16:14.463 ======================================================== 00:16:14.463 Total : 14827.19 57.92 4317.75 887.77 15253.63 00:16:14.463 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:14.463 rmmod nvme_tcp 00:16:14.463 rmmod nvme_fabrics 00:16:14.463 rmmod nvme_keyring 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2154880 ']' 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2154880 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@947 -- # '[' -z 2154880 ']' 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # kill -0 2154880 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # uname 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2154880 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # process_name=nvmf 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@957 -- # '[' nvmf = sudo ']' 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2154880' 00:16:14.463 killing process with pid 2154880 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # kill 2154880 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@971 -- # wait 2154880 00:16:14.463 nvmf threads initialize successfully 00:16:14.463 bdev subsystem init successfully 00:16:14.463 created a nvmf target service 00:16:14.463 create targets's poll groups done 00:16:14.463 all subsystems of target started 00:16:14.463 nvmf target is running 00:16:14.463 all subsystems of target stopped 00:16:14.463 destroy targets's poll groups done 00:16:14.463 destroyed the nvmf target service 00:16:14.463 bdev subsystem finish successfully 00:16:14.463 nvmf threads destroy successfully 00:16:14.463 08:42:07 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.463 08:42:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.034 08:42:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:15.034 08:42:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:16:15.034 08:42:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:15.034 08:42:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:15.034 00:16:15.034 real 0m16.494s 00:16:15.034 user 0m45.416s 00:16:15.034 sys 0m3.730s 00:16:15.034 08:42:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:15.034 08:42:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:15.034 ************************************ 00:16:15.034 END TEST nvmf_example 00:16:15.034 ************************************ 00:16:15.034 08:42:09 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:16:15.034 08:42:09 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:15.034 08:42:09 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:15.034 08:42:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:15.034 ************************************ 00:16:15.034 START TEST nvmf_filesystem 00:16:15.034 ************************************ 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:16:15.034 * Looking for test storage... 
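Before the filesystem test output continues, the nvmftestfini teardown traced above is worth capturing as the mirror image of the setup. A condensed sketch, using the nvmfpid captured when the target was launched (see the earlier sketch); remove_spdk_ns in the harness amounts to deleting the namespaces it created, and the iptables removal is an assumption since the trace does not show the rule being dropped:

modprobe -v -r nvme-tcp          # the trace shows this rmmod'ing nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid" 2>/dev/null
wait "$nvmfpid" 2>/dev/null || true
ip netns delete cvl_0_0_ns_spdk        # hands cvl_0_0 back to the root namespace
ip -4 addr flush cvl_0_1
iptables -D INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT || true   # assumed inverse of the setup rule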
00:16:15.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:16:15.034 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:16:15.035 08:42:09 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:16:15.035 08:42:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:15.035 #define SPDK_CONFIG_H 00:16:15.035 #define SPDK_CONFIG_APPS 1 00:16:15.035 #define SPDK_CONFIG_ARCH native 00:16:15.035 #undef SPDK_CONFIG_ASAN 00:16:15.035 #undef SPDK_CONFIG_AVAHI 00:16:15.035 #undef SPDK_CONFIG_CET 00:16:15.035 #define SPDK_CONFIG_COVERAGE 1 00:16:15.035 #define SPDK_CONFIG_CROSS_PREFIX 00:16:15.035 #undef SPDK_CONFIG_CRYPTO 00:16:15.035 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:15.035 #undef SPDK_CONFIG_CUSTOMOCF 00:16:15.035 #undef SPDK_CONFIG_DAOS 00:16:15.035 #define SPDK_CONFIG_DAOS_DIR 00:16:15.035 #define SPDK_CONFIG_DEBUG 1 00:16:15.035 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:15.035 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:16:15.035 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:16:15.035 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:16:15.035 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:15.035 #undef SPDK_CONFIG_DPDK_UADK 00:16:15.035 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:16:15.035 #define SPDK_CONFIG_EXAMPLES 1 00:16:15.035 #undef SPDK_CONFIG_FC 00:16:15.035 #define SPDK_CONFIG_FC_PATH 00:16:15.035 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:15.035 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:15.035 #undef SPDK_CONFIG_FUSE 00:16:15.035 #undef SPDK_CONFIG_FUZZER 00:16:15.035 #define SPDK_CONFIG_FUZZER_LIB 00:16:15.035 #undef SPDK_CONFIG_GOLANG 00:16:15.035 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:15.035 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:15.035 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:15.035 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:16:15.035 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:15.035 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:15.035 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:15.035 #define SPDK_CONFIG_IDXD 1 00:16:15.035 #undef SPDK_CONFIG_IDXD_KERNEL 00:16:15.035 #undef SPDK_CONFIG_IPSEC_MB 00:16:15.035 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:15.035 #define SPDK_CONFIG_ISAL 1 00:16:15.035 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:15.035 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:15.035 #define 
SPDK_CONFIG_LIBDIR 00:16:15.035 #undef SPDK_CONFIG_LTO 00:16:15.035 #define SPDK_CONFIG_MAX_LCORES 00:16:15.035 #define SPDK_CONFIG_NVME_CUSE 1 00:16:15.035 #undef SPDK_CONFIG_OCF 00:16:15.035 #define SPDK_CONFIG_OCF_PATH 00:16:15.035 #define SPDK_CONFIG_OPENSSL_PATH 00:16:15.035 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:15.035 #define SPDK_CONFIG_PGO_DIR 00:16:15.035 #undef SPDK_CONFIG_PGO_USE 00:16:15.035 #define SPDK_CONFIG_PREFIX /usr/local 00:16:15.035 #undef SPDK_CONFIG_RAID5F 00:16:15.035 #undef SPDK_CONFIG_RBD 00:16:15.035 #define SPDK_CONFIG_RDMA 1 00:16:15.035 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:15.035 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:15.035 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:15.035 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:15.035 #define SPDK_CONFIG_SHARED 1 00:16:15.035 #undef SPDK_CONFIG_SMA 00:16:15.036 #define SPDK_CONFIG_TESTS 1 00:16:15.036 #undef SPDK_CONFIG_TSAN 00:16:15.036 #define SPDK_CONFIG_UBLK 1 00:16:15.036 #define SPDK_CONFIG_UBSAN 1 00:16:15.036 #undef SPDK_CONFIG_UNIT_TESTS 00:16:15.036 #undef SPDK_CONFIG_URING 00:16:15.036 #define SPDK_CONFIG_URING_PATH 00:16:15.036 #undef SPDK_CONFIG_URING_ZNS 00:16:15.036 #undef SPDK_CONFIG_USDT 00:16:15.036 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:15.036 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:15.036 #define SPDK_CONFIG_VFIO_USER 1 00:16:15.036 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:15.036 #define SPDK_CONFIG_VHOST 1 00:16:15.036 #define SPDK_CONFIG_VIRTIO 1 00:16:15.036 #undef SPDK_CONFIG_VTUNE 00:16:15.036 #define SPDK_CONFIG_VTUNE_DIR 00:16:15.036 #define SPDK_CONFIG_WERROR 1 00:16:15.036 #define SPDK_CONFIG_WPDK_DIR 00:16:15.036 #undef SPDK_CONFIG_XNVME 00:16:15.036 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:16:15.036 
08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:16:15.036 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v22.11.4 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:16:15.037 
08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:15.037 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
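The long run of paired trace entries above (a bare ": 1" or ": 0" immediately followed by "export RUN_NIGHTLY", "export SPDK_TEST_NVMF", and so on) is what bash's ":" no-op builtin looks like under xtrace when it is used to give a variable a default value. A minimal sketch of the idiom, using two flag names that actually appear in this log (the specific defaults are whatever autotest_common.sh chooses; 0 here is illustrative):

    # ':' is a no-op, but its arguments still undergo expansion, so
    # ${VAR:=default} assigns only if the caller has not set VAR already.
    : "${RUN_NIGHTLY:=0}"
    : "${SPDK_TEST_NVMF:=0}"
    export RUN_NIGHTLY SPDK_TEST_NVMF

    # Under 'set -x' the first line is traced as ': 0' (or ': 1' when the
    # caller preset the flag), which is exactly the pattern in the log above.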
00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2157203 ]] 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2157203 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1677 -- # set_test_storage 2147483648 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.302S2W 00:16:15.038 
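Two small techniques show up in the entries above: "kill -0 2157203" probes whether a PID exists without delivering any signal (only the exit status matters), and "mktemp -udt spdk.XXXXXX" dry-runs a template to produce an unused name under /tmp without creating anything. A hedged standalone sketch (the PID is copied from the log; error handling is minimal):

    pid=2157203                         # PID under test, as seen in the trace
    if kill -0 "$pid" 2>/dev/null; then
        echo "process $pid is alive"
    fi

    # -u: dry run (print a name, create nothing); -t: place it under $TMPDIR
    # (default /tmp). The result looks like /tmp/spdk.302S2W in the log.
    storage_fallback=$(mktemp -udt spdk.XXXXXX)
    echo "would fall back to $storage_fallback"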
08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.302S2W/tests/target /tmp/spdk.302S2W 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=964968448 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4319461376 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=48412475392 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994729472 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=13582254080 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30992654336 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997364736 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 
-- # uses["$mount"]=4710400 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12389961728 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398948352 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8986624 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996684800 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997364736 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=679936 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199468032 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199472128 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:16:15.038 * Looking for test storage... 
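The read loop traced above ingests "df -T" output into parallel associative arrays keyed by mount point, so the storage search that follows can compare requested size against available bytes per filesystem. A sketch of the same parsing pattern; the *1024 conversion is an assumption about how the script turns df's default 1K blocks into the byte counts visible in the trace:

    declare -A mounts fss sizes avails uses

    while read -r source fs size used avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))    # df -T reports 1K blocks
        avails["$mount"]=$((avail * 1024))
        uses["$mount"]=$((used * 1024))
    done < <(df -T | grep -v Filesystem)    # drop the header row

    # Resolve which mount backs a candidate directory, as the trace does:
    target=$(df /tmp | awk '$1 !~ /Filesystem/ {print $6}')
    echo "free bytes on $target: ${avails[$target]}"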
00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=48412475392 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=15796846592 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:15.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set -o errtrace 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # shopt -s extdebug 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # true 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # xtrace_fd 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:16:15.038 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
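Every "HH:MM:SS domain -- file@line -- # command" line in this log is produced by the custom PS4 assigned a few entries back, combined with "set -x". A minimal standalone reproduction (demo script name and echo are illustrative; in recent bash, PS4 undergoes the same prompt expansion as PS1, so \t becomes a HH:MM:SS timestamp):

    #!/usr/bin/env bash
    test_domain=nvmf_tcp.nvmf_filesystem
    # ${BASH_SOURCE%/*/*} strips the last two path components; stripping that
    # as a prefix leaves just "dir/file.sh", matching the log's file@line tags.
    PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
    set -x
    echo hello

Run once, this prints a trace line of the same shape as the log before the command's own output.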
00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:16:15.039 08:42:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:17.574 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:17.574 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:16:17.574 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:17.574 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:17.574 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:17.574 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:17.574 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:17.574 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:16:17.574 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:17.574 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:16:17.574 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:16:17.574 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:16:17.574 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:16:17.574 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:16:17.574 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:16:17.574 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
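The array setup above collects supported NIC PCI device IDs (e810, x722, mlx) from a PCI bus cache; the entries that follow then glob /sys/bus/pci/devices/$pci/net/* to recover the kernel interface names bound to each matching address, which is where the "Found net devices under 0000:09:00.0: cvl_0_0" messages come from. A hedged standalone walk over sysfs showing the same discovery, hard-coding one Intel E810 ID from the log:

    # The kernel exposes a device's netdev names under its PCI sysfs node.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")          # e.g. 0x8086 for Intel
        device=$(cat "$pci/device")          # e.g. 0x159b for an E810 port
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
        done
    done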
00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:17.575 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:17.575 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:17.575 Found net devices under 0000:09:00.0: cvl_0_0 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:17.575 Found net devices under 0000:09:00.1: cvl_0_1 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:17.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:17.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:16:17.575 00:16:17.575 --- 10.0.0.2 ping statistics --- 00:16:17.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.575 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:16:17.575 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:17.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:17.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:16:17.576 00:16:17.576 --- 10.0.0.1 ping statistics --- 00:16:17.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.576 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:16:17.576 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.576 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:16:17.576 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:17.576 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.576 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:17.576 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:17.576 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.576 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:17.576 08:42:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:17.576 08:42:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:16:17.576 08:42:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:17.576 08:42:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:17.576 08:42:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:17.834 ************************************ 00:16:17.834 START TEST nvmf_filesystem_no_in_capsule 00:16:17.834 ************************************ 00:16:17.834 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 0 00:16:17.834 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:16:17.834 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:16:17.834 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:17.834 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:17.834 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:17.834 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2159246 00:16:17.834 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:17.834 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2159246 00:16:17.834 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 
2159246 ']' 00:16:17.834 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.834 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:17.834 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.834 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:17.834 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:17.834 [2024-05-15 08:42:12.435711] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:16:17.834 [2024-05-15 08:42:12.435806] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.834 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.834 [2024-05-15 08:42:12.515323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:17.834 [2024-05-15 08:42:12.610767] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.834 [2024-05-15 08:42:12.610833] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.834 [2024-05-15 08:42:12.610850] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.834 [2024-05-15 08:42:12.610863] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.834 [2024-05-15 08:42:12.610875] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
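Taken together, the nvmf_tcp_init entries earlier (ip netns add, moving cvl_0_0 into the namespace, addressing 10.0.0.1/10.0.0.2, then launching nvmf_tgt via "ip netns exec cvl_0_0_ns_spdk") let one machine play both roles: the SPDK target owns one port of the dual-port NIC inside a private network namespace while the kernel initiator keeps the other port in the root namespace, so traffic genuinely crosses the link instead of short-circuiting over loopback. A sketch of the same wiring with hypothetical interface and namespace names (tgt0/ini0/spdk_tgt_ns stand in for the cvl_* names in the log):

    ip netns add spdk_tgt_ns
    ip link set tgt0 netns spdk_tgt_ns          # hide one port in the namespace
    ip addr add 10.0.0.1/24 dev ini0            # initiator side, root namespace
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev tgt0
    ip link set ini0 up
    ip netns exec spdk_tgt_ns ip link set tgt0 up
    ip netns exec spdk_tgt_ns ip link set lo up
    # Sanity-check both directions before starting the target, as the log does:
    ping -c 1 10.0.0.2 && ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1

Any process started under "ip netns exec spdk_tgt_ns" (here, the nvmf target) then listens only on the namespaced port, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace above.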
00:16:17.834 [2024-05-15 08:42:12.614240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.834 [2024-05-15 08:42:12.614288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.834 [2024-05-15 08:42:12.614372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:17.834 [2024-05-15 08:42:12.614376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.092 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:18.092 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:16:18.092 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:18.092 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:18.092 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:18.092 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.092 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:16:18.092 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:16:18.092 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:18.092 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:18.092 [2024-05-15 08:42:12.770953] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.092 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:18.092 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:16:18.092 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:18.092 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:18.351 Malloc1 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:18.351 [2024-05-15 08:42:12.954174] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:18.351 [2024-05-15 08:42:12.954529] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bdev_info='[ 00:16:18.351 { 00:16:18.351 "name": "Malloc1", 00:16:18.351 "aliases": [ 00:16:18.351 "20e74bb3-ac17-492c-b3da-bfff63622b4d" 00:16:18.351 ], 00:16:18.351 "product_name": "Malloc disk", 00:16:18.351 "block_size": 512, 00:16:18.351 "num_blocks": 1048576, 00:16:18.351 "uuid": "20e74bb3-ac17-492c-b3da-bfff63622b4d", 00:16:18.351 "assigned_rate_limits": { 00:16:18.351 "rw_ios_per_sec": 0, 00:16:18.351 "rw_mbytes_per_sec": 0, 00:16:18.351 "r_mbytes_per_sec": 0, 00:16:18.351 "w_mbytes_per_sec": 0 00:16:18.351 }, 00:16:18.351 "claimed": true, 00:16:18.351 "claim_type": "exclusive_write", 00:16:18.351 "zoned": false, 00:16:18.351 "supported_io_types": { 00:16:18.351 "read": true, 00:16:18.351 "write": true, 00:16:18.351 "unmap": true, 00:16:18.351 "write_zeroes": true, 00:16:18.351 "flush": true, 00:16:18.351 "reset": true, 00:16:18.351 "compare": false, 00:16:18.351 "compare_and_write": false, 00:16:18.351 "abort": true, 00:16:18.351 "nvme_admin": false, 00:16:18.351 "nvme_io": false 00:16:18.351 }, 00:16:18.351 "memory_domains": [ 00:16:18.351 { 00:16:18.351 "dma_device_id": "system", 00:16:18.351 "dma_device_type": 1 
00:16:18.351 }, 00:16:18.351 { 00:16:18.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.351 "dma_device_type": 2 00:16:18.351 } 00:16:18.351 ], 00:16:18.351 "driver_specific": {} 00:16:18.351 } 00:16:18.351 ]' 00:16:18.351 08:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size' 00:16:18.351 08:42:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # bs=512 00:16:18.351 08:42:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks' 00:16:18.351 08:42:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576 00:16:18.351 08:42:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_size=512 00:16:18.352 08:42:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # echo 512 00:16:18.352 08:42:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:16:18.352 08:42:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:18.916 08:42:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:16:18.916 08:42:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:16:18.916 08:42:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:16:18.916 08:42:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:16:18.916 08:42:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:16:21.443 08:42:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:16:21.443 08:42:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:16:21.443 08:42:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:16:21.443 08:42:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:16:21.443 08:42:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:16:21.443 08:42:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:16:21.443 08:42:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:16:21.443 08:42:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:16:21.443 08:42:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:16:21.443 08:42:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:16:21.444 08:42:15 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:16:21.444 08:42:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:21.444 08:42:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:16:21.444 08:42:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:16:21.444 08:42:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:16:21.444 08:42:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:16:21.444 08:42:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:16:21.444 08:42:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:16:22.011 08:42:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:16:22.941 08:42:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:16:22.941 08:42:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:16:22.941 08:42:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:16:22.941 08:42:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:22.941 08:42:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:22.941 ************************************ 00:16:22.941 START TEST filesystem_ext4 00:16:22.941 ************************************ 00:16:22.941 08:42:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:16:23.198 08:42:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:16:23.198 08:42:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:23.198 08:42:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:16:23.198 08:42:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:16:23.198 08:42:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:16:23.198 08:42:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:16:23.198 08:42:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local force 00:16:23.198 08:42:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:16:23.198 08:42:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:16:23.198 08:42:17 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:16:23.198 mke2fs 1.46.5 (30-Dec-2021) 00:16:23.198 Discarding device blocks: 0/522240 done 00:16:23.198 Creating filesystem with 522240 1k blocks and 130560 inodes 00:16:23.198 Filesystem UUID: ae8bdc09-f727-4034-a83c-6124dfde38a4 00:16:23.198 Superblock backups stored on blocks: 00:16:23.198 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:16:23.198 00:16:23.198 Allocating group tables: 0/64 done 00:16:23.198 Writing inode tables: 0/64 done 00:16:23.455 Creating journal (8192 blocks): done 00:16:24.386 Writing superblocks and filesystem accounting information: 0/64 done 00:16:24.386 00:16:24.386 08:42:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@942 -- # return 0 00:16:24.386 08:42:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:25.319 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:25.319 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:16:25.319 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:25.319 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:16:25.319 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:16:25.319 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:25.319 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2159246 00:16:25.319 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:25.319 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:25.319 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:25.320 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:25.320 00:16:25.320 real 0m2.097s 00:16:25.320 user 0m0.017s 00:16:25.320 sys 0m0.031s 00:16:25.320 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:25.320 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:16:25.320 ************************************ 00:16:25.320 END TEST filesystem_ext4 00:16:25.320 ************************************ 00:16:25.320 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:16:25.320 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:16:25.320 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:25.320 08:42:19 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:25.320 ************************************ 00:16:25.320 START TEST filesystem_btrfs 00:16:25.320 ************************************ 00:16:25.320 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:16:25.320 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:16:25.320 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:25.320 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:16:25.320 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:16:25.320 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:16:25.320 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:16:25.320 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local force 00:16:25.320 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # '[' btrfs = ext4 ']' 00:16:25.320 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # force=-f 00:16:25.320 08:42:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:16:25.320 btrfs-progs v6.6.2 00:16:25.320 See https://btrfs.readthedocs.io for more information. 00:16:25.320 00:16:25.320 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:16:25.320 NOTE: several default settings have changed in version 5.15, please make sure 00:16:25.320 this does not affect your deployments: 00:16:25.320 - DUP for metadata (-m dup) 00:16:25.320 - enabled no-holes (-O no-holes) 00:16:25.320 - enabled free-space-tree (-R free-space-tree) 00:16:25.320 00:16:25.320 Label: (null) 00:16:25.320 UUID: 29632fd4-c589-4183-883a-24054d81ccf6 00:16:25.320 Node size: 16384 00:16:25.320 Sector size: 4096 00:16:25.320 Filesystem size: 510.00MiB 00:16:25.320 Block group profiles: 00:16:25.320 Data: single 8.00MiB 00:16:25.320 Metadata: DUP 32.00MiB 00:16:25.320 System: DUP 8.00MiB 00:16:25.320 SSD detected: yes 00:16:25.320 Zoned device: no 00:16:25.320 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:16:25.320 Runtime features: free-space-tree 00:16:25.320 Checksum: crc32c 00:16:25.320 Number of devices: 1 00:16:25.320 Devices: 00:16:25.320 ID SIZE PATH 00:16:25.320 1 510.00MiB /dev/nvme0n1p1 00:16:25.320 00:16:25.320 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@942 -- # return 0 00:16:25.320 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2159246 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:26.253 00:16:26.253 real 0m0.970s 00:16:26.253 user 0m0.016s 00:16:26.253 sys 0m0.041s 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:16:26.253 ************************************ 00:16:26.253 END TEST filesystem_btrfs 00:16:26.253 ************************************ 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:16:26.253 08:42:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:26.253 ************************************ 00:16:26.253 START TEST filesystem_xfs 00:16:26.253 ************************************ 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local i=0 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local force 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # force=-f 00:16:26.253 08:42:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:16:26.253 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:16:26.253 = sectsz=512 attr=2, projid32bit=1 00:16:26.253 = crc=1 finobt=1, sparse=1, rmapbt=0 00:16:26.253 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:16:26.254 data = bsize=4096 blocks=130560, imaxpct=25 00:16:26.254 = sunit=0 swidth=0 blks 00:16:26.254 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:16:26.254 log =internal log bsize=4096 blocks=16384, version=2 00:16:26.254 = sectsz=512 sunit=0 blks, lazy-count=1 00:16:26.254 realtime =none extsz=4096 blocks=0, rtextents=0 00:16:27.188 Discarding blocks...Done. 
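For reference: every mkfs call traced in this section goes through the make_filesystem helper in common/autotest_common.sh. Reconstructed from the visible xtrace lines (@923 through @942) as a sketch of only the logic this run exercises — the counter `i` is declared for a retry path that never fires here, so it is omitted below:

  make_filesystem() {
      local fstype=$1
      local dev_name=$2
      local i=0
      local force
      # mkfs.ext4 forces creation with -F; mkfs.btrfs and mkfs.xfs take -f
      if [ "$fstype" = ext4 ]; then
          force=-F
      else
          force=-f
      fi
      mkfs."$fstype" $force "$dev_name"
      return 0
  }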
00:16:27.188 08:42:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@942 -- # return 0 00:16:27.188 08:42:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2159246 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:29.714 00:16:29.714 real 0m3.384s 00:16:29.714 user 0m0.013s 00:16:29.714 sys 0m0.043s 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:16:29.714 ************************************ 00:16:29.714 END TEST filesystem_xfs 00:16:29.714 ************************************ 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:29.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:16:29.714 
08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:29.714 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:29.715 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:29.715 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:29.715 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2159246 00:16:29.715 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 2159246 ']' 00:16:29.715 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # kill -0 2159246 00:16:29.715 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # uname 00:16:29.715 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:29.715 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2159246 00:16:29.715 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:16:29.715 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:16:29.715 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2159246' 00:16:29.715 killing process with pid 2159246 00:16:29.715 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # kill 2159246 00:16:29.715 [2024-05-15 08:42:24.473770] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:29.715 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # wait 2159246 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:30.314 00:16:30.314 real 0m12.527s 00:16:30.314 user 0m48.062s 00:16:30.314 sys 0m1.775s 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:30.314 ************************************ 00:16:30.314 END TEST nvmf_filesystem_no_in_capsule 00:16:30.314 ************************************ 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1098 -- # 
'[' 3 -le 1 ']' 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:30.314 ************************************ 00:16:30.314 START TEST nvmf_filesystem_in_capsule 00:16:30.314 ************************************ 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 4096 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2160929 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2160929 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 2160929 ']' 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:30.314 08:42:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:30.314 [2024-05-15 08:42:25.023347] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:16:30.314 [2024-05-15 08:42:25.023416] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.314 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.574 [2024-05-15 08:42:25.100680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:30.574 [2024-05-15 08:42:25.189811] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.574 [2024-05-15 08:42:25.189869] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
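A note on the core mask: nvmf_tgt is launched here with -m 0xF, i.e. bits 0 through 3 set, so SPDK pins one reactor thread to each of cores 0-3 — exactly what the four "Reactor started on core N" notices below report. Building such a mask for the first N cores is plain shell arithmetic, nothing SPDK-specific:

  ncores=4
  printf -- '-m 0x%X\n' "$(( (1 << ncores) - 1 ))"   # prints "-m 0xF" (cores 0..3)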
00:16:30.574 [2024-05-15 08:42:25.189885] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.574 [2024-05-15 08:42:25.189899] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.574 [2024-05-15 08:42:25.189911] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:30.574 [2024-05-15 08:42:25.189976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.574 [2024-05-15 08:42:25.190035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:30.574 [2024-05-15 08:42:25.191236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:30.574 [2024-05-15 08:42:25.191240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.574 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:30.574 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:16:30.574 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:30.574 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:30.574 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:30.574 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.574 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:16:30.574 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:16:30.574 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.574 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:30.574 [2024-05-15 08:42:25.349036] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.574 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.574 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:16:30.574 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.574 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:30.832 Malloc1 00:16:30.832 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.832 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:30.832 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.832 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:30.832 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.832 08:42:25 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:30.832 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.832 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:30.832 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.832 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.832 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.832 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:30.832 [2024-05-15 08:42:25.542247] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:30.832 [2024-05-15 08:42:25.542618] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.832 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.832 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:16:30.832 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:16:30.832 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:16:30.832 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:16:30.832 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:16:30.833 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:16:30.833 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.833 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:30.833 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.833 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bdev_info='[ 00:16:30.833 { 00:16:30.833 "name": "Malloc1", 00:16:30.833 "aliases": [ 00:16:30.833 "85a0f110-cf89-4633-80c6-e279f4d59dc9" 00:16:30.833 ], 00:16:30.833 "product_name": "Malloc disk", 00:16:30.833 "block_size": 512, 00:16:30.833 "num_blocks": 1048576, 00:16:30.833 "uuid": "85a0f110-cf89-4633-80c6-e279f4d59dc9", 00:16:30.833 "assigned_rate_limits": { 00:16:30.833 "rw_ios_per_sec": 0, 00:16:30.833 "rw_mbytes_per_sec": 0, 00:16:30.833 "r_mbytes_per_sec": 0, 00:16:30.833 "w_mbytes_per_sec": 0 00:16:30.833 }, 00:16:30.833 "claimed": true, 00:16:30.833 "claim_type": "exclusive_write", 00:16:30.833 "zoned": false, 00:16:30.833 "supported_io_types": { 00:16:30.833 "read": true, 00:16:30.833 "write": true, 00:16:30.833 "unmap": true, 00:16:30.833 "write_zeroes": true, 00:16:30.833 "flush": true, 00:16:30.833 "reset": true, 
00:16:30.833 "compare": false, 00:16:30.833 "compare_and_write": false, 00:16:30.833 "abort": true, 00:16:30.833 "nvme_admin": false, 00:16:30.833 "nvme_io": false 00:16:30.833 }, 00:16:30.833 "memory_domains": [ 00:16:30.833 { 00:16:30.833 "dma_device_id": "system", 00:16:30.833 "dma_device_type": 1 00:16:30.833 }, 00:16:30.833 { 00:16:30.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.833 "dma_device_type": 2 00:16:30.833 } 00:16:30.833 ], 00:16:30.833 "driver_specific": {} 00:16:30.833 } 00:16:30.833 ]' 00:16:30.833 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size' 00:16:30.833 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # bs=512 00:16:30.833 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks' 00:16:31.091 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576 00:16:31.091 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_size=512 00:16:31.091 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # echo 512 00:16:31.091 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:16:31.091 08:42:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:31.656 08:42:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:16:31.656 08:42:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:16:31.656 08:42:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:16:31.656 08:42:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:16:31.656 08:42:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:16:33.556 08:42:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:16:33.556 08:42:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:16:33.556 08:42:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:16:33.556 08:42:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:16:33.556 08:42:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:16:33.556 08:42:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:16:33.556 08:42:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:16:33.556 08:42:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:16:33.556 08:42:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:16:33.556 08:42:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:16:33.556 08:42:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:16:33.556 08:42:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:33.556 08:42:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:16:33.556 08:42:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:16:33.556 08:42:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:16:33.556 08:42:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:16:33.556 08:42:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:16:34.122 08:42:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:16:34.380 08:42:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:16:35.314 08:42:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:16:35.314 08:42:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:16:35.314 08:42:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:16:35.314 08:42:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:35.314 08:42:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:35.314 ************************************ 00:16:35.314 START TEST filesystem_in_capsule_ext4 00:16:35.314 ************************************ 00:16:35.314 08:42:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:16:35.314 08:42:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:16:35.314 08:42:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:35.314 08:42:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:16:35.314 08:42:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:16:35.314 08:42:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:16:35.314 08:42:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:16:35.314 08:42:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local force 00:16:35.314 08:42:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:16:35.314 08:42:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:16:35.314 08:42:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:16:35.314 mke2fs 1.46.5 (30-Dec-2021) 00:16:35.572 Discarding device blocks: 0/522240 done 00:16:35.572 Creating filesystem with 522240 1k blocks and 130560 inodes 00:16:35.572 Filesystem UUID: 0ef335fe-ae0b-406f-a295-31a62e7cce69 00:16:35.572 Superblock backups stored on blocks: 00:16:35.572 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:16:35.572 00:16:35.572 Allocating group tables: 0/64 done 00:16:35.572 Writing inode tables: 0/64 done 00:16:37.470 Creating journal (8192 blocks): done 00:16:38.036 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:16:38.036 00:16:38.036 08:42:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@942 -- # return 0 00:16:38.036 08:42:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2160929 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:38.970 00:16:38.970 real 0m3.576s 00:16:38.970 user 0m0.009s 00:16:38.970 sys 0m0.034s 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:16:38.970 ************************************ 00:16:38.970 END TEST filesystem_in_capsule_ext4 00:16:38.970 ************************************ 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:38.970 ************************************ 00:16:38.970 START TEST filesystem_in_capsule_btrfs 00:16:38.970 ************************************ 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local force 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # '[' btrfs = ext4 ']' 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # force=-f 00:16:38.970 08:42:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:16:39.536 btrfs-progs v6.6.2 00:16:39.536 See https://btrfs.readthedocs.io for more information. 00:16:39.536 00:16:39.536 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:16:39.536 NOTE: several default settings have changed in version 5.15, please make sure 00:16:39.536 this does not affect your deployments: 00:16:39.536 - DUP for metadata (-m dup) 00:16:39.536 - enabled no-holes (-O no-holes) 00:16:39.536 - enabled free-space-tree (-R free-space-tree) 00:16:39.536 00:16:39.536 Label: (null) 00:16:39.536 UUID: a95d6976-d06c-4e82-9d24-ed6960e6c333 00:16:39.536 Node size: 16384 00:16:39.536 Sector size: 4096 00:16:39.536 Filesystem size: 510.00MiB 00:16:39.536 Block group profiles: 00:16:39.536 Data: single 8.00MiB 00:16:39.536 Metadata: DUP 32.00MiB 00:16:39.536 System: DUP 8.00MiB 00:16:39.536 SSD detected: yes 00:16:39.536 Zoned device: no 00:16:39.536 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:16:39.536 Runtime features: free-space-tree 00:16:39.536 Checksum: crc32c 00:16:39.536 Number of devices: 1 00:16:39.536 Devices: 00:16:39.536 ID SIZE PATH 00:16:39.536 1 510.00MiB /dev/nvme0n1p1 00:16:39.536 00:16:39.536 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@942 -- # return 0 00:16:39.536 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:40.102 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:40.102 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:16:40.102 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:40.102 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:16:40.102 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:16:40.102 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:40.102 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2160929 00:16:40.102 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:40.102 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:40.102 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:40.102 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:40.102 00:16:40.102 real 0m1.160s 00:16:40.102 user 0m0.012s 00:16:40.102 sys 0m0.038s 00:16:40.102 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:40.102 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:16:40.102 ************************************ 00:16:40.102 END TEST filesystem_in_capsule_btrfs 00:16:40.102 ************************************ 00:16:40.102 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:16:40.102 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:16:40.102 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:40.102 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:40.360 ************************************ 00:16:40.360 START TEST filesystem_in_capsule_xfs 00:16:40.360 ************************************ 00:16:40.360 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:16:40.360 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:16:40.360 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:40.360 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:16:40.360 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:16:40.360 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:16:40.360 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local i=0 00:16:40.360 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local force 00:16:40.360 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:16:40.360 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # force=-f 00:16:40.360 08:42:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:16:40.360 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:16:40.360 = sectsz=512 attr=2, projid32bit=1 00:16:40.360 = crc=1 finobt=1, sparse=1, rmapbt=0 00:16:40.360 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:16:40.360 data = bsize=4096 blocks=130560, imaxpct=25 00:16:40.360 = sunit=0 swidth=0 blks 00:16:40.360 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:16:40.360 log =internal log bsize=4096 blocks=16384, version=2 00:16:40.360 = sectsz=512 sunit=0 blks, lazy-count=1 00:16:40.360 realtime =none extsz=4096 blocks=0, rtextents=0 00:16:41.293 Discarding blocks...Done. 
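This second pass re-runs the same ext4/btrfs/xfs sequence as the earlier no-in-capsule run; the only setup difference between the two is the in-capsule data size handed to nvmf_create_transport (-c 0 there, -c 4096 here, driven by the 4096 passed to nvmf_filesystem_part). Written against SPDK's standard scripts/rpc.py client, which the test framework's rpc_cmd wraps, the two variants are, as a sketch using the parameters traced in this log:

  # no_in_capsule pass: command capsules carry no inline data
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  # in_capsule pass: hosts may place up to 4096 bytes of write data
  # directly inside the command capsule, sparing a separate data transfer
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096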
00:16:41.293 08:42:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@942 -- # return 0 00:16:41.293 08:42:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:43.191 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:43.191 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:16:43.191 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:43.191 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:16:43.191 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:16:43.191 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:43.191 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2160929 00:16:43.191 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:43.191 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:43.191 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:43.191 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:43.191 00:16:43.191 real 0m2.922s 00:16:43.191 user 0m0.021s 00:16:43.191 sys 0m0.033s 00:16:43.191 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:43.191 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:16:43.191 ************************************ 00:16:43.191 END TEST filesystem_in_capsule_xfs 00:16:43.191 ************************************ 00:16:43.191 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:16:43.191 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:16:43.191 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:43.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.191 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:43.191 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:16:43.191 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:16:43.191 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.191 08:42:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:16:43.191 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.449 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:16:43.449 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.449 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:43.449 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:43.449 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:43.449 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:43.449 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2160929 00:16:43.449 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 2160929 ']' 00:16:43.449 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # kill -0 2160929 00:16:43.449 08:42:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # uname 00:16:43.449 08:42:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:43.449 08:42:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2160929 00:16:43.449 08:42:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:16:43.449 08:42:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:16:43.449 08:42:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2160929' 00:16:43.449 killing process with pid 2160929 00:16:43.449 08:42:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # kill 2160929 00:16:43.449 [2024-05-15 08:42:38.029713] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:43.449 08:42:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # wait 2160929 00:16:43.709 08:42:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:43.709 00:16:43.709 real 0m13.487s 00:16:43.709 user 0m51.836s 00:16:43.709 sys 0m1.768s 00:16:43.709 08:42:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:43.709 08:42:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:43.709 ************************************ 00:16:43.709 END TEST nvmf_filesystem_in_capsule 00:16:43.709 ************************************ 00:16:43.709 08:42:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:16:43.709 08:42:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:16:43.709 08:42:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:16:43.709 08:42:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:43.709 08:42:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:16:43.709 08:42:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:43.709 08:42:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:43.709 rmmod nvme_tcp 00:16:43.978 rmmod nvme_fabrics 00:16:43.978 rmmod nvme_keyring 00:16:43.978 08:42:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:43.978 08:42:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:16:43.978 08:42:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:16:43.978 08:42:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:43.978 08:42:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:43.978 08:42:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:43.978 08:42:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:43.978 08:42:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.978 08:42:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:43.978 08:42:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.978 08:42:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.978 08:42:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.891 08:42:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:45.891 00:16:45.891 real 0m30.923s 00:16:45.891 user 1m40.954s 00:16:45.891 sys 0m5.418s 00:16:45.891 08:42:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:45.891 08:42:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:45.891 ************************************ 00:16:45.891 END TEST nvmf_filesystem 00:16:45.891 ************************************ 00:16:45.891 08:42:40 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:45.891 08:42:40 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:45.891 08:42:40 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:45.891 08:42:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:45.891 ************************************ 00:16:45.891 START TEST nvmf_target_discovery 00:16:45.891 ************************************ 00:16:45.891 08:42:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:45.891 * Looking for test storage... 
00:16:46.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:16:46.191 08:42:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:48.729 08:42:43 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:48.729 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:48.729 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:48.729 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:48.730 Found net devices under 0000:09:00.0: cvl_0_0 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:48.730 Found net devices under 0000:09:00.1: cvl_0_1 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:48.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:16:48.730 00:16:48.730 --- 10.0.0.2 ping statistics --- 00:16:48.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.730 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:48.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:48.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:16:48.730 00:16:48.730 --- 10.0.0.1 ping statistics --- 00:16:48.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.730 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2164970 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2164970 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@828 -- # '[' -z 2164970 ']' 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:16:48.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:48.730 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:48.730 [2024-05-15 08:42:43.424669] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:16:48.730 [2024-05-15 08:42:43.424747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.730 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.730 [2024-05-15 08:42:43.504108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:48.989 [2024-05-15 08:42:43.596060] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.989 [2024-05-15 08:42:43.596123] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.989 [2024-05-15 08:42:43.596140] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:48.989 [2024-05-15 08:42:43.596153] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:48.989 [2024-05-15 08:42:43.596165] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:48.989 [2024-05-15 08:42:43.596248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.989 [2024-05-15 08:42:43.596281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.989 [2024-05-15 08:42:43.596335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:48.989 [2024-05-15 08:42:43.596338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.989 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:48.989 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@861 -- # return 0 00:16:48.989 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:48.989 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:48.989 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:48.989 08:42:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.989 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:48.989 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:48.989 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:48.989 [2024-05-15 08:42:43.761124] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:48.989 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:48.989 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:16:48.989 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:48.989 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:16:48.989 08:42:43 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:48.989 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.248 Null1 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.248 [2024-05-15 08:42:43.805189] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:49.248 [2024-05-15 08:42:43.805531] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.248 Null2 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.248 Null3 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.248 Null4 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.248 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:16:49.248 00:16:49.248 Discovery Log Number of Records 6, Generation counter 6 00:16:49.248 =====Discovery Log Entry 0====== 00:16:49.248 trtype: tcp 00:16:49.248 adrfam: ipv4 00:16:49.248 subtype: current discovery subsystem 00:16:49.248 treq: not required 00:16:49.248 portid: 0 00:16:49.248 trsvcid: 4420 00:16:49.248 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:49.248 traddr: 10.0.0.2 00:16:49.248 eflags: explicit discovery connections, duplicate discovery information 00:16:49.248 sectype: none 00:16:49.248 =====Discovery Log Entry 1====== 00:16:49.248 trtype: tcp 00:16:49.248 adrfam: ipv4 00:16:49.248 subtype: nvme subsystem 00:16:49.248 treq: not required 00:16:49.248 portid: 0 00:16:49.248 trsvcid: 4420 00:16:49.248 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:49.248 traddr: 10.0.0.2 00:16:49.248 eflags: none 00:16:49.248 sectype: none 00:16:49.248 =====Discovery Log Entry 2====== 00:16:49.248 trtype: tcp 00:16:49.248 adrfam: ipv4 00:16:49.248 subtype: nvme subsystem 00:16:49.248 treq: not required 00:16:49.248 portid: 0 00:16:49.248 trsvcid: 4420 00:16:49.248 subnqn: nqn.2016-06.io.spdk:cnode2 00:16:49.248 traddr: 10.0.0.2 00:16:49.248 eflags: none 00:16:49.248 sectype: none 00:16:49.248 =====Discovery Log Entry 3====== 00:16:49.248 trtype: tcp 00:16:49.248 adrfam: ipv4 00:16:49.248 subtype: nvme subsystem 00:16:49.248 treq: not required 00:16:49.248 portid: 0 00:16:49.248 trsvcid: 4420 00:16:49.248 subnqn: nqn.2016-06.io.spdk:cnode3 00:16:49.248 traddr: 10.0.0.2 
00:16:49.248 eflags: none 00:16:49.248 sectype: none 00:16:49.248 =====Discovery Log Entry 4====== 00:16:49.248 trtype: tcp 00:16:49.248 adrfam: ipv4 00:16:49.248 subtype: nvme subsystem 00:16:49.248 treq: not required 00:16:49.249 portid: 0 00:16:49.249 trsvcid: 4420 00:16:49.249 subnqn: nqn.2016-06.io.spdk:cnode4 00:16:49.249 traddr: 10.0.0.2 00:16:49.249 eflags: none 00:16:49.249 sectype: none 00:16:49.249 =====Discovery Log Entry 5====== 00:16:49.249 trtype: tcp 00:16:49.249 adrfam: ipv4 00:16:49.249 subtype: discovery subsystem referral 00:16:49.249 treq: not required 00:16:49.249 portid: 0 00:16:49.249 trsvcid: 4430 00:16:49.249 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:49.249 traddr: 10.0.0.2 00:16:49.249 eflags: none 00:16:49.249 sectype: none 00:16:49.249 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:16:49.249 Perform nvmf subsystem discovery via RPC 00:16:49.249 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:16:49.249 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.249 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.249 [ 00:16:49.249 { 00:16:49.249 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:49.249 "subtype": "Discovery", 00:16:49.249 "listen_addresses": [ 00:16:49.249 { 00:16:49.249 "trtype": "TCP", 00:16:49.249 "adrfam": "IPv4", 00:16:49.249 "traddr": "10.0.0.2", 00:16:49.249 "trsvcid": "4420" 00:16:49.249 } 00:16:49.249 ], 00:16:49.249 "allow_any_host": true, 00:16:49.249 "hosts": [] 00:16:49.249 }, 00:16:49.249 { 00:16:49.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.249 "subtype": "NVMe", 00:16:49.249 "listen_addresses": [ 00:16:49.249 { 00:16:49.249 "trtype": "TCP", 00:16:49.249 "adrfam": "IPv4", 00:16:49.249 "traddr": "10.0.0.2", 00:16:49.249 "trsvcid": "4420" 00:16:49.249 } 00:16:49.249 ], 00:16:49.249 "allow_any_host": true, 00:16:49.249 "hosts": [], 00:16:49.249 "serial_number": "SPDK00000000000001", 00:16:49.249 "model_number": "SPDK bdev Controller", 00:16:49.249 "max_namespaces": 32, 00:16:49.249 "min_cntlid": 1, 00:16:49.249 "max_cntlid": 65519, 00:16:49.249 "namespaces": [ 00:16:49.249 { 00:16:49.249 "nsid": 1, 00:16:49.249 "bdev_name": "Null1", 00:16:49.249 "name": "Null1", 00:16:49.249 "nguid": "20F40DFD6E9141B2826AFCB129621D23", 00:16:49.249 "uuid": "20f40dfd-6e91-41b2-826a-fcb129621d23" 00:16:49.249 } 00:16:49.249 ] 00:16:49.249 }, 00:16:49.249 { 00:16:49.249 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:49.249 "subtype": "NVMe", 00:16:49.249 "listen_addresses": [ 00:16:49.249 { 00:16:49.249 "trtype": "TCP", 00:16:49.249 "adrfam": "IPv4", 00:16:49.249 "traddr": "10.0.0.2", 00:16:49.249 "trsvcid": "4420" 00:16:49.249 } 00:16:49.249 ], 00:16:49.249 "allow_any_host": true, 00:16:49.249 "hosts": [], 00:16:49.249 "serial_number": "SPDK00000000000002", 00:16:49.249 "model_number": "SPDK bdev Controller", 00:16:49.249 "max_namespaces": 32, 00:16:49.249 "min_cntlid": 1, 00:16:49.249 "max_cntlid": 65519, 00:16:49.249 "namespaces": [ 00:16:49.249 { 00:16:49.249 "nsid": 1, 00:16:49.249 "bdev_name": "Null2", 00:16:49.249 "name": "Null2", 00:16:49.249 "nguid": "10A11A184DA540D4BE68715569CDE145", 00:16:49.249 "uuid": "10a11a18-4da5-40d4-be68-715569cde145" 00:16:49.249 } 00:16:49.249 ] 00:16:49.249 }, 00:16:49.249 { 00:16:49.249 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:16:49.249 "subtype": "NVMe", 00:16:49.249 "listen_addresses": [ 
00:16:49.249 { 00:16:49.249 "trtype": "TCP", 00:16:49.249 "adrfam": "IPv4", 00:16:49.249 "traddr": "10.0.0.2", 00:16:49.249 "trsvcid": "4420" 00:16:49.249 } 00:16:49.249 ], 00:16:49.249 "allow_any_host": true, 00:16:49.249 "hosts": [], 00:16:49.249 "serial_number": "SPDK00000000000003", 00:16:49.249 "model_number": "SPDK bdev Controller", 00:16:49.249 "max_namespaces": 32, 00:16:49.249 "min_cntlid": 1, 00:16:49.249 "max_cntlid": 65519, 00:16:49.249 "namespaces": [ 00:16:49.249 { 00:16:49.249 "nsid": 1, 00:16:49.249 "bdev_name": "Null3", 00:16:49.249 "name": "Null3", 00:16:49.249 "nguid": "819C57F8596D44D48D5E92FCB038B196", 00:16:49.249 "uuid": "819c57f8-596d-44d4-8d5e-92fcb038b196" 00:16:49.249 } 00:16:49.249 ] 00:16:49.249 }, 00:16:49.249 { 00:16:49.249 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:16:49.249 "subtype": "NVMe", 00:16:49.249 "listen_addresses": [ 00:16:49.249 { 00:16:49.249 "trtype": "TCP", 00:16:49.249 "adrfam": "IPv4", 00:16:49.249 "traddr": "10.0.0.2", 00:16:49.249 "trsvcid": "4420" 00:16:49.249 } 00:16:49.249 ], 00:16:49.249 "allow_any_host": true, 00:16:49.249 "hosts": [], 00:16:49.249 "serial_number": "SPDK00000000000004", 00:16:49.249 "model_number": "SPDK bdev Controller", 00:16:49.249 "max_namespaces": 32, 00:16:49.249 "min_cntlid": 1, 00:16:49.249 "max_cntlid": 65519, 00:16:49.249 "namespaces": [ 00:16:49.249 { 00:16:49.249 "nsid": 1, 00:16:49.249 "bdev_name": "Null4", 00:16:49.249 "name": "Null4", 00:16:49.249 "nguid": "D3D6A7EC4B7B49BD989AB96A366C3860", 00:16:49.249 "uuid": "d3d6a7ec-4b7b-49bd-989a-b96a366c3860" 00:16:49.249 } 00:16:49.249 ] 00:16:49.249 } 00:16:49.249 ] 00:16:49.249 08:42:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.249 08:42:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.249 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:16:49.507 
08:42:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:49.507 rmmod nvme_tcp 00:16:49.507 rmmod nvme_fabrics 00:16:49.507 rmmod nvme_keyring 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:16:49.507 08:42:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2164970 ']' 00:16:49.508 08:42:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2164970 00:16:49.508 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@947 -- # '[' -z 2164970 ']' 00:16:49.508 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # kill -0 2164970 00:16:49.508 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # uname 00:16:49.508 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:49.508 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2164970 00:16:49.508 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:16:49.508 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:16:49.508 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2164970' 00:16:49.508 killing process with pid 2164970 00:16:49.508 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # kill 2164970 00:16:49.508 [2024-05-15 08:42:44.197575] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:49.508 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@971 -- # wait 2164970 00:16:49.767 08:42:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:49.767 08:42:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:49.767 08:42:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:49.767 08:42:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:49.767 08:42:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:49.767 08:42:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.767 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.767 08:42:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.670 08:42:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:51.670 00:16:51.670 real 0m5.830s 00:16:51.670 user 
0m4.186s 00:16:51.670 sys 0m2.204s 00:16:51.670 08:42:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:51.671 08:42:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:51.671 ************************************ 00:16:51.671 END TEST nvmf_target_discovery 00:16:51.671 ************************************ 00:16:51.929 08:42:46 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:51.929 08:42:46 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:51.929 08:42:46 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:51.929 08:42:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:51.929 ************************************ 00:16:51.929 START TEST nvmf_referrals 00:16:51.929 ************************************ 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:51.929 * Looking for test storage... 00:16:51.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.929 08:42:46 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:16:51.929 08:42:46 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:16:51.929 08:42:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:54.456 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:54.456 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:54.456 Found net devices under 0000:09:00.0: cvl_0_0 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.456 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:54.457 Found net devices under 0000:09:00.1: cvl_0_1 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
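[annotation] The namespace plumbing unfolding here is how the suite gets a real initiator/target pair out of one dual-port E810 NIC on a single host: port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace to act as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), so NVMe/TCP traffic genuinely crosses the wire instead of looping back. A minimal sketch of the same topology, assuming the interface names and the 10.0.0.0/24 plan used in this run:

    ip netns add cvl_0_0_ns_spdk                          # target gets its own network stack
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator port
    ping -c 1 10.0.0.2                                    # sanity check before any NVMe traffic

The two pings that follow in the log (root namespace to 10.0.0.2, then namespace to 10.0.0.1) are exactly this sanity check run in both directions.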
00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:54.457 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:54.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:16:54.715 00:16:54.715 --- 10.0.0.2 ping statistics --- 00:16:54.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.715 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:54.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:54.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:16:54.715 00:16:54.715 --- 10.0.0.1 ping statistics --- 00:16:54.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.715 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2167355 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2167355 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@828 -- # '[' -z 2167355 ']' 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:54.715 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.715 [2024-05-15 08:42:49.316666] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:16:54.715 [2024-05-15 08:42:49.316748] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.715 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.715 [2024-05-15 08:42:49.395539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:54.715 [2024-05-15 08:42:49.493010] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.715 [2024-05-15 08:42:49.493067] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.715 [2024-05-15 08:42:49.493085] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:54.715 [2024-05-15 08:42:49.493098] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:54.715 [2024-05-15 08:42:49.493109] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.715 [2024-05-15 08:42:49.493189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.715 [2024-05-15 08:42:49.493245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.715 [2024-05-15 08:42:49.493284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:54.715 [2024-05-15 08:42:49.493287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@861 -- # return 0 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.973 [2024-05-15 08:42:49.652966] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.973 [2024-05-15 08:42:49.664935] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:54.973 [2024-05-15 08:42:49.665295] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.973 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:55.231 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:55.231 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:55.231 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:16:55.231 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:55.231 08:42:49 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:55.231 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:55.231 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:55.231 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:55.231 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:55.231 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:55.231 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:16:55.231 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:55.231 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.231 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:55.231 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:16:55.231 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:55.231 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.231 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:55.231 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:16:55.232 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:55.232 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.232 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:55.232 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:55.232 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:16:55.232 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:55.232 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.232 08:42:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:55.232 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:16:55.232 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:16:55.232 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:55.232 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:55.232 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:55.232 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:55.232 08:42:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:55.489 08:42:50 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 00:16:55.489 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:16:55.489 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:16:55.489 08:42:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:55.489 08:42:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.489 08:42:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:55.489 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:55.489 08:42:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:55.489 08:42:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.489 08:42:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:55.489 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:16:55.489 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:55.490 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:55.490 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:55.490 08:42:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:55.490 08:42:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.490 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:55.490 08:42:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:55.490 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:16:55.490 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:55.490 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:16:55.490 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:55.490 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:55.490 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:55.490 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:55.490 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:55.490 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:16:55.490 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:55.490 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:16:55.490 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:16:55.490 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:55.490 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:55.490 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:16:55.747 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:55.748 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:56.005 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:56.005 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
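[annotation] The get_referral_ips helper being exercised here checks each referral from two vantage points: "rpc" asks the target directly (nvmf_discovery_get_referrals piped through jq -r '.[].address.traddr'), while "nvme" asks the discovery service over the wire and filters out the record that every discovery log page carries for the subsystem being queried. A referral only passes when both sorted views match. A hedged sketch of that round trip, assuming the 10.0.0.2:8009 discovery listener and the 127.0.0.x referrals configured above (the rpc.py path is illustrative; the log uses the rpc_cmd wrapper):

    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    rpc_view=$(scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
    wire_view=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
    [[ $rpc_view == "$wire_view" ]] && echo 'referrals consistent'

The select(.subtype != "current discovery subsystem") filter is the important part: without it, the discovery subsystem's own record would be miscounted as a referral.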
00:16:56.005 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:56.005 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:16:56.005 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:56.005 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:16:56.005 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:16:56.005 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:56.005 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:56.005 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:56.005 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:16:56.005 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:16:56.005 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:16:56.005 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:16:56.005 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:56.005 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:16:56.263 08:42:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:56.264 08:42:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:56.264 rmmod nvme_tcp 00:16:56.264 rmmod nvme_fabrics 00:16:56.264 rmmod nvme_keyring 00:16:56.264 08:42:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:56.264 08:42:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:16:56.264 08:42:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:16:56.264 08:42:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2167355 ']' 00:16:56.264 08:42:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2167355 00:16:56.264 08:42:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@947 -- # '[' -z 2167355 ']' 00:16:56.264 08:42:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # kill -0 2167355 00:16:56.264 08:42:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # uname 00:16:56.264 08:42:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:56.264 08:42:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2167355 00:16:56.264 08:42:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:16:56.264 08:42:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:16:56.264 08:42:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2167355' 00:16:56.264 killing process with pid 2167355 00:16:56.264 08:42:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # kill 2167355 00:16:56.264 [2024-05-15 08:42:51.032569] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:56.264 08:42:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@971 -- # wait 2167355 00:16:56.522 08:42:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:56.522 08:42:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:56.522 08:42:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:56.522 08:42:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:56.523 08:42:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
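[annotation] Teardown mirrors setup in reverse: nvmftestfini first unloads the initiator-side kernel modules (the bare "rmmod nvme_tcp / rmmod nvme_fabrics / rmmod nvme_keyring" lines above are modprobe -r narrating its work), then killprocess resolves the target PID's command name with ps --no-headers -o comm= and checks it is not a sudo wrapper before signalling reactor_0, and finally the TCP fixture removes the network namespace and flushes the leftover address. A condensed sketch of that order, assuming the PID variable and namespace names from this run (the namespace delete is inferred from _remove_spdk_ns, whose output is redirected away above):

    modprobe -v -r nvme-tcp                       # initiator transport out first
    modprobe -v -r nvme-fabrics
    name=$(ps --no-headers -o comm= "$nvmfpid")
    [[ $name != sudo ]] && kill "$nvmfpid"        # guard: never signal a sudo wrapper
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumption: what _remove_spdk_ns boils down to here
    ip -4 addr flush cvl_0_1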
00:16:56.523 08:42:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.523 08:42:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.523 08:42:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.056 08:42:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:59.056 00:16:59.056 real 0m6.785s 00:16:59.056 user 0m8.607s 00:16:59.056 sys 0m2.348s 00:16:59.056 08:42:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:59.056 08:42:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:59.056 ************************************ 00:16:59.056 END TEST nvmf_referrals 00:16:59.056 ************************************ 00:16:59.056 08:42:53 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:16:59.056 08:42:53 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:59.056 08:42:53 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:59.056 08:42:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:59.056 ************************************ 00:16:59.056 START TEST nvmf_connect_disconnect 00:16:59.056 ************************************ 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:16:59.056 * Looking for test storage... 00:16:59.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.056 08:42:53 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
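[annotation] Each suite entry is launched through the run_test wrapper responsible for the START/END banners: nvmf.sh hands it a test name plus the script invocation, it sanity-checks the argument count (the '[' 3 -le 1 ']' test above), runs the script, and on success prints the real/user/sys timing followed by the END TEST banner, which is what makes a single test greppable out of this log. A minimal sketch of the pattern, assuming the banner format shown here (the real autotest_common.sh helper also manages xtrace state):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                 # e.g. connect_disconnect.sh --transport=tcp
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }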
00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:16:59.056 08:42:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:01.585 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:01.585 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:17:01.586 
08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:01.586 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:01.586 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:01.586 Found net devices under 0000:09:00.0: cvl_0_0 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:01.586 Found net devices under 0000:09:00.1: cvl_0_1 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:01.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:01.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:17:01.586 00:17:01.586 --- 10.0.0.2 ping statistics --- 00:17:01.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.586 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:01.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:01.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:17:01.586 00:17:01.586 --- 10.0.0.1 ping statistics --- 00:17:01.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.586 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2169937 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:01.586 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2169937 00:17:01.587 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@828 -- # '[' -z 2169937 ']' 00:17:01.587 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.587 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:01.587 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.587 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:01.587 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:01.587 [2024-05-15 08:42:56.224360] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
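
The nvmf_tcp_init trace above is the entire network fixture for this test: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the NVMe/TCP target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening port 4420 and a ping in each direction as a sanity check. A minimal standalone sketch of the same topology, using only the interface names and addresses recorded in the log and omitting error handling:

    # Sketch of the nvmf_tcp_init sequence traced above; names and
    # addresses are taken from the log, error handling is omitted.
    set -e
    TARGET_IF=cvl_0_0        # NVMF_TARGET_INTERFACE
    INITIATOR_IF=cvl_0_1     # NVMF_INITIATOR_INTERFACE
    NS=cvl_0_0_ns_spdk       # NVMF_TARGET_NAMESPACE

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    # Target port moves into its own namespace; initiator stays in the root one.
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                   # NVMF_INITIATOR_IP
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # NVMF_FIRST_TARGET_IP
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open the NVMe/TCP port and sanity-check reachability in both directions.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

Putting the target port in its own namespace is what lets a single host drive a real TCP path between initiator and target over two physical ports of the same NIC.
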
00:17:01.587 [2024-05-15 08:42:56.224461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.587 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.587 [2024-05-15 08:42:56.299756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:01.845 [2024-05-15 08:42:56.386463] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.845 [2024-05-15 08:42:56.386528] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.845 [2024-05-15 08:42:56.386542] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.845 [2024-05-15 08:42:56.386568] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.845 [2024-05-15 08:42:56.386578] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:01.845 [2024-05-15 08:42:56.386659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.845 [2024-05-15 08:42:56.386724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.845 [2024-05-15 08:42:56.386772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:01.845 [2024-05-15 08:42:56.386775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@861 -- # return 0 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:01.845 [2024-05-15 08:42:56.529713] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:01.845 08:42:56 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:01.845 [2024-05-15 08:42:56.580438] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:01.845 [2024-05-15 08:42:56.580755] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:17:01.845 08:42:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:17:04.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:08.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:13.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:29.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:34.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:36.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:38.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:44.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.017 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:54.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:56.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:00.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:03.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:05.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:07.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:10.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:12.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:14.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:16.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:19.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:21.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:23.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:26.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:28.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:30.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:32.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:34.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:36.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:39.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:41.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:43.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:46.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:48.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:50.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:52.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:55.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:57.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:59.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:01.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:04.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:06.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:08.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:11.132 [2024-05-15 08:45:05.361002] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1342f30 is same with the state(5) to be set 00:19:11.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:13.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:15.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:17.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:19.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:22.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:24.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:26.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:28.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:31.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:33.304 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:19:35.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:37.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:40.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:42.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:44.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:47.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:49.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:51.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:53.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:56.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:58.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:00.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:03.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:04.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:07.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:09.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:11.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:13.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:16.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:18.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:20.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:22.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:25.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:27.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:29.662 [2024-05-15 08:46:24.061963] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a06c0 is same with the state(5) to be set 00:20:29.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:31.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:34.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:35.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:38.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:41.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:42.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:45.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:47.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:47.411 08:46:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:20:47.411 08:46:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:20:47.411 08:46:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:47.411 08:46:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:20:47.411 08:46:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:47.411 08:46:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:20:47.411 08:46:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:47.411 08:46:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:47.411 rmmod nvme_tcp 00:20:47.411 rmmod nvme_fabrics 00:20:47.411 rmmod nvme_keyring 00:20:47.411 08:46:42 nvmf_tcp.nvmf_connect_disconnect 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:47.411 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:20:47.411 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:20:47.411 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2169937 ']' 00:20:47.411 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2169937 00:20:47.411 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@947 -- # '[' -z 2169937 ']' 00:20:47.411 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # kill -0 2169937 00:20:47.411 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # uname 00:20:47.411 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:47.411 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2169937 00:20:47.411 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:20:47.411 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:20:47.411 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2169937' 00:20:47.411 killing process with pid 2169937 00:20:47.411 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # kill 2169937 00:20:47.411 [2024-05-15 08:46:42.054775] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:47.411 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # wait 2169937 00:20:47.670 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:47.670 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:47.670 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:47.670 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:47.670 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:47.670 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.670 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:47.670 08:46:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.573 08:46:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:49.573 00:20:49.573 real 3m50.993s 00:20:49.573 user 14m37.027s 00:20:49.573 sys 0m31.684s 00:20:49.573 08:46:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:49.573 08:46:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:49.573 ************************************ 00:20:49.573 END TEST nvmf_connect_disconnect 00:20:49.573 ************************************ 00:20:49.831 08:46:44 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:20:49.831 08:46:44 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 
1 ']' 00:20:49.831 08:46:44 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:49.831 08:46:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:49.831 ************************************ 00:20:49.831 START TEST nvmf_multitarget 00:20:49.831 ************************************ 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:20:49.831 * Looking for test storage... 00:20:49.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:49.831 08:46:44 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:20:49.832 08:46:44 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:20:49.832 08:46:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:20:52.362 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:52.362 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:20:52.362 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:52.362 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:52.362 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:52.363 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:52.363 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:52.363 Found net 
devices under 0000:09:00.0: cvl_0_0 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:52.363 Found net devices under 0000:09:00.1: cvl_0_1 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip 
netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:52.363 08:46:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:52.363 08:46:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:52.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:52.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:20:52.363 00:20:52.363 --- 10.0.0.2 ping statistics --- 00:20:52.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.363 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:20:52.363 08:46:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:52.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:52.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:20:52.363 00:20:52.363 --- 10.0.0.1 ping statistics --- 00:20:52.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.363 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:20:52.363 08:46:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:52.363 08:46:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:20:52.363 08:46:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:52.363 08:46:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:52.363 08:46:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:52.363 08:46:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:52.363 08:46:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:52.363 08:46:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:52.363 08:46:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:52.363 08:46:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:20:52.363 08:46:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:52.363 08:46:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:52.363 08:46:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:20:52.363 08:46:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2200632 00:20:52.364 08:46:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:52.364 08:46:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2200632 00:20:52.364 08:46:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@828 -- # '[' -z 2200632 ']' 00:20:52.364 08:46:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.364 08:46:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:52.364 08:46:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
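
At this point the harness has launched nvmf_tgt inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waitforlisten blocks until the target's RPC socket comes up. A simplified stand-in for that wait loop, assuming only the pid and socket path shown in the log — the real helper in autotest_common.sh is more thorough than this sketch:

    # Polls until the just-started nvmf_tgt both stays alive and exposes
    # its RPC UNIX socket, giving up after max_retries (100, as logged).
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target process died
            [[ -S $rpc_addr ]] && return 0           # socket is up; RPCs can be issued
            sleep 0.5
        done
        return 1                                     # gave up after max_retries
    }

Usage would mirror the log, e.g. waitforlisten_sketch "$nvmfpid" /var/tmp/spdk.sock right after recording the target's pid.
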
00:20:52.364 08:46:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:52.364 08:46:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:20:52.364 [2024-05-15 08:46:47.078439] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:20:52.364 [2024-05-15 08:46:47.078549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.364 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.364 [2024-05-15 08:46:47.154548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:52.622 [2024-05-15 08:46:47.243008] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.622 [2024-05-15 08:46:47.243072] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.622 [2024-05-15 08:46:47.243085] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.622 [2024-05-15 08:46:47.243096] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.622 [2024-05-15 08:46:47.243105] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:52.622 [2024-05-15 08:46:47.243183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.622 [2024-05-15 08:46:47.243249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.622 [2024-05-15 08:46:47.243286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:52.622 [2024-05-15 08:46:47.243288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.622 08:46:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:52.622 08:46:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@861 -- # return 0 00:20:52.622 08:46:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:52.622 08:46:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:52.622 08:46:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:20:52.622 08:46:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.622 08:46:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:52.622 08:46:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:20:52.622 08:46:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:20:52.882 08:46:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:20:52.882 08:46:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:20:52.882 "nvmf_tgt_1" 00:20:52.882 08:46:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:20:53.141 "nvmf_tgt_2" 00:20:53.141 08:46:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:20:53.141 08:46:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:20:53.141 08:46:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:20:53.141 08:46:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:20:53.141 true 00:20:53.141 08:46:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:20:53.400 true 00:20:53.400 08:46:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:20:53.400 08:46:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:20:53.400 08:46:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:20:53.400 08:46:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:20:53.400 08:46:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:20:53.400 08:46:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:53.400 08:46:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:20:53.400 08:46:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:53.400 08:46:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:20:53.400 08:46:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:53.400 08:46:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:53.400 rmmod nvme_tcp 00:20:53.400 rmmod nvme_fabrics 00:20:53.400 rmmod nvme_keyring 00:20:53.400 08:46:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:53.400 08:46:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:20:53.400 08:46:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:20:53.400 08:46:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2200632 ']' 00:20:53.400 08:46:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2200632 00:20:53.400 08:46:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@947 -- # '[' -z 2200632 ']' 00:20:53.400 08:46:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # kill -0 2200632 00:20:53.400 08:46:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # uname 00:20:53.660 08:46:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:53.660 08:46:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2200632 00:20:53.660 08:46:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:20:53.660 08:46:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:20:53.660 08:46:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2200632' 00:20:53.660 killing process with pid 2200632 00:20:53.660 08:46:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # kill 2200632 00:20:53.660 08:46:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@971 -- # wait 2200632 00:20:53.660 08:46:48 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:53.660 08:46:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:53.660 08:46:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:53.660 08:46:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:53.660 08:46:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:53.660 08:46:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.661 08:46:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.661 08:46:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.198 08:46:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:56.199 00:20:56.199 real 0m6.079s 00:20:56.199 user 0m6.359s 00:20:56.199 sys 0m2.146s 00:20:56.199 08:46:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:56.199 08:46:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:20:56.199 ************************************ 00:20:56.199 END TEST nvmf_multitarget 00:20:56.199 ************************************ 00:20:56.199 08:46:50 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:20:56.199 08:46:50 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:20:56.199 08:46:50 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:56.199 08:46:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:56.199 ************************************ 00:20:56.199 START TEST nvmf_rpc 00:20:56.199 ************************************ 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:20:56.199 * Looking for test storage... 
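
Before the nvmf_rpc output takes over: stripped of xtrace noise, the nvmf_multitarget pass that just completed reduces to a handful of calls against the running target through the multitarget_rpc.py wrapper — create two named targets beside the default one, check the count with jq, delete them, and check again. The logged comparisons ('[' 1 '!=' 1 ']', '[' 3 '!=' 3 ']') are exactly these count assertions. A condensed sketch using the same wrapper and arguments recorded above:

    # Condensed form of the multitarget checks; with set -e each bare
    # [ ... ] acts as an assertion that aborts the run on mismatch.
    set -e
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    count() { "$rpc_py" nvmf_get_targets | jq length; }

    [ "$(count)" -eq 1 ]                        # only the default target exists
    "$rpc_py" nvmf_create_target -n nvmf_tgt_1 -s 32
    "$rpc_py" nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$(count)" -eq 3 ]                        # default + the two new targets
    "$rpc_py" nvmf_delete_target -n nvmf_tgt_1
    "$rpc_py" nvmf_delete_target -n nvmf_tgt_2
    [ "$(count)" -eq 1 ]                        # back to just the default
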
00:20:56.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:20:56.199 08:46:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
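
The declarations beginning here are gather_supported_nvmf_pci_devs classifying NICs by PCI vendor:device ID, exactly as in the two earlier test preambles: the Intel E810 parts (0x1592, 0x159b) land in e810, X722 (0x37d2) in x722, and the Mellanox (0x15b3) device IDs in mlx. A rough illustration of the bucketing, with pci_bus_cache stubbed via lspci — an assumption for the sketch, since the harness populates that cache elsewhere:

    # Illustrative only: pci_bus_cache is assumed to map "vendor:device"
    # to space-separated BDF addresses; here it is filled with lspci.
    declare -A pci_bus_cache
    intel=0x8086
    for id in 0x1592 0x159b 0x37d2; do
        pci_bus_cache["$intel:$id"]=$(lspci -Dn -d "${intel#0x}:${id#0x}" | awk '{print $1}')
    done
    # Unquoted expansion is deliberate: it word-splits the BDF list into
    # array elements, mirroring the e810+=(...) lines in the trace.
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})  # E810 (ice)
    x722=(${pci_bus_cache["$intel:0x37d2"]})                                    # X722 (i40e)
    # ... the mlx bucket is filled the same way from the 0x15b3 device IDs ...
    echo "E810 ports found: ${e810[*]:-none}"

On this rig the two 0x159b entries resolve to 0000:09:00.0 and 0000:09:00.1, which is why every preamble prints the same pair of "Found 0000:09:00.x" lines.
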
00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.729 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:58.730 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:58.730 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:58.730 Found net devices under 0000:09:00.0: cvl_0_0 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:58.730 Found net devices under 0000:09:00.1: cvl_0_1 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:58.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:58.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:20:58.730 00:20:58.730 --- 10.0.0.2 ping statistics --- 00:20:58.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.730 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:58.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
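[Annotation] nvmf_tcp_init above splits the two E810 ports across network namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and an iptables rule opens the NVMe/TCP port. Condensed from the ip/iptables calls in the trace (the second ping's transcript continues below):

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean interfaces
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target sanity check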
00:20:58.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:20:58.730 00:20:58.730 --- 10.0.0.1 ping statistics --- 00:20:58.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.730 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2203139 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2203139 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@828 -- # '[' -z 2203139 ']' 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:58.730 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:58.730 [2024-05-15 08:46:53.278252] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:20:58.730 [2024-05-15 08:46:53.278336] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.730 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.730 [2024-05-15 08:46:53.358865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:58.730 [2024-05-15 08:46:53.450550] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.730 [2024-05-15 08:46:53.450612] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
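[Annotation] nvmfappstart above launches nvmf_tgt inside the target namespace and waitforlisten blocks until the RPC socket answers. A condensed sketch of that startup handshake, under the assumption of a checkout-relative binary path (the suite's waitforlisten also bounds its retries):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1     # give up if the target died during init
    sleep 0.5
done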
00:20:58.730 [2024-05-15 08:46:53.450628] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.730 [2024-05-15 08:46:53.450642] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.730 [2024-05-15 08:46:53.450654] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:58.730 [2024-05-15 08:46:53.450754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.730 [2024-05-15 08:46:53.450790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.731 [2024-05-15 08:46:53.450839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:58.731 [2024-05-15 08:46:53.450842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@861 -- # return 0 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:20:58.988 "tick_rate": 2700000000, 00:20:58.988 "poll_groups": [ 00:20:58.988 { 00:20:58.988 "name": "nvmf_tgt_poll_group_000", 00:20:58.988 "admin_qpairs": 0, 00:20:58.988 "io_qpairs": 0, 00:20:58.988 "current_admin_qpairs": 0, 00:20:58.988 "current_io_qpairs": 0, 00:20:58.988 "pending_bdev_io": 0, 00:20:58.988 "completed_nvme_io": 0, 00:20:58.988 "transports": [] 00:20:58.988 }, 00:20:58.988 { 00:20:58.988 "name": "nvmf_tgt_poll_group_001", 00:20:58.988 "admin_qpairs": 0, 00:20:58.988 "io_qpairs": 0, 00:20:58.988 "current_admin_qpairs": 0, 00:20:58.988 "current_io_qpairs": 0, 00:20:58.988 "pending_bdev_io": 0, 00:20:58.988 "completed_nvme_io": 0, 00:20:58.988 "transports": [] 00:20:58.988 }, 00:20:58.988 { 00:20:58.988 "name": "nvmf_tgt_poll_group_002", 00:20:58.988 "admin_qpairs": 0, 00:20:58.988 "io_qpairs": 0, 00:20:58.988 "current_admin_qpairs": 0, 00:20:58.988 "current_io_qpairs": 0, 00:20:58.988 "pending_bdev_io": 0, 00:20:58.988 "completed_nvme_io": 0, 00:20:58.988 "transports": [] 00:20:58.988 }, 00:20:58.988 { 00:20:58.988 "name": "nvmf_tgt_poll_group_003", 00:20:58.988 "admin_qpairs": 0, 00:20:58.988 "io_qpairs": 0, 00:20:58.988 "current_admin_qpairs": 0, 00:20:58.988 "current_io_qpairs": 0, 00:20:58.988 "pending_bdev_io": 0, 00:20:58.988 "completed_nvme_io": 0, 00:20:58.988 "transports": [] 00:20:58.988 } 00:20:58.988 ] 00:20:58.988 }' 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:58.988 [2024-05-15 08:46:53.709425] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.988 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:58.989 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.989 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:20:58.989 "tick_rate": 2700000000, 00:20:58.989 "poll_groups": [ 00:20:58.989 { 00:20:58.989 "name": "nvmf_tgt_poll_group_000", 00:20:58.989 "admin_qpairs": 0, 00:20:58.989 "io_qpairs": 0, 00:20:58.989 "current_admin_qpairs": 0, 00:20:58.989 "current_io_qpairs": 0, 00:20:58.989 "pending_bdev_io": 0, 00:20:58.989 "completed_nvme_io": 0, 00:20:58.989 "transports": [ 00:20:58.989 { 00:20:58.989 "trtype": "TCP" 00:20:58.989 } 00:20:58.989 ] 00:20:58.989 }, 00:20:58.989 { 00:20:58.989 "name": "nvmf_tgt_poll_group_001", 00:20:58.989 "admin_qpairs": 0, 00:20:58.989 "io_qpairs": 0, 00:20:58.989 "current_admin_qpairs": 0, 00:20:58.989 "current_io_qpairs": 0, 00:20:58.989 "pending_bdev_io": 0, 00:20:58.989 "completed_nvme_io": 0, 00:20:58.989 "transports": [ 00:20:58.989 { 00:20:58.989 "trtype": "TCP" 00:20:58.989 } 00:20:58.989 ] 00:20:58.989 }, 00:20:58.989 { 00:20:58.989 "name": "nvmf_tgt_poll_group_002", 00:20:58.989 "admin_qpairs": 0, 00:20:58.989 "io_qpairs": 0, 00:20:58.989 "current_admin_qpairs": 0, 00:20:58.989 "current_io_qpairs": 0, 00:20:58.989 "pending_bdev_io": 0, 00:20:58.989 "completed_nvme_io": 0, 00:20:58.989 "transports": [ 00:20:58.989 { 00:20:58.989 "trtype": "TCP" 00:20:58.989 } 00:20:58.989 ] 00:20:58.989 }, 00:20:58.989 { 00:20:58.989 "name": "nvmf_tgt_poll_group_003", 00:20:58.989 "admin_qpairs": 0, 00:20:58.989 "io_qpairs": 0, 00:20:58.989 "current_admin_qpairs": 0, 00:20:58.989 "current_io_qpairs": 0, 00:20:58.989 "pending_bdev_io": 0, 00:20:58.989 "completed_nvme_io": 0, 00:20:58.989 "transports": [ 00:20:58.989 { 00:20:58.989 "trtype": "TCP" 00:20:58.989 } 00:20:58.989 ] 00:20:58.989 } 00:20:58.989 ] 00:20:58.989 }' 00:20:58.989 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:20:58.989 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:20:58.989 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:20:58.989 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:58.989 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:20:58.989 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:20:58.989 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
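[Annotation] jcount and jsum in the trace are small target/rpc.sh helpers that validate the nvmf_get_stats JSON shown above: jq pulls one field per poll group, wc -l counts the matches, awk sums the values. An equivalent standalone sketch, assuming jq and awk are installed:

stats=$(./scripts/rpc.py nvmf_get_stats)
jcount() { jq "$1" <<< "$stats" | wc -l; }                        # how many matches
jsum()   { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }  # total across poll groups
(( $(jcount '.poll_groups[].name') == 4 ))       # one poll group per core of -m 0xF
(( $(jsum '.poll_groups[].io_qpairs') == 0 ))    # no initiator connected yet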
00:20:58.989 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:20:58.989 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:59.246 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:20:59.246 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:20:59.246 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:20:59.246 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:20:59.246 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:59.246 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.246 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.246 Malloc1 00:20:59.246 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.246 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:59.246 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.246 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.246 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.246 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:59.246 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.246 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.246 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.247 [2024-05-15 08:46:53.866924] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:59.247 [2024-05-15 08:46:53.867296] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:20:59.247 [2024-05-15 08:46:53.889716] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:20:59.247 Failed to write to /dev/nvme-fabrics: Input/output error 00:20:59.247 could not add new controller: failed to write to nvme-fabrics device 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.247 08:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:59.811 08:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:20:59.811 08:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:20:59.811 08:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:20:59.811 08:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:20:59.812 08:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:21:01.710 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:21:01.710 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:01.710 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c 
SPDKISFASTANDAWESOME 00:21:01.710 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:21:01.710 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:21:01.710 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:21:01.710 08:46:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:01.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:01.994 [2024-05-15 08:46:56.553980] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 
'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:21:01.994 Failed to write to /dev/nvme-fabrics: Input/output error 00:21:01.994 could not add new controller: failed to write to nvme-fabrics device 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.994 08:46:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:02.559 08:46:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:21:02.559 08:46:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:21:02.559 08:46:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:21:02.559 08:46:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:21:02.559 08:46:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:21:04.457 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:21:04.457 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:04.457 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:21:04.457 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:21:04.457 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:21:04.457 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:21:04.457 08:46:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:04.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:04.715 
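[Annotation] The passage above exercises the per-subsystem host ACL: with allow_any_host disabled, nvme connect from an unlisted host NQN is rejected with "does not allow host"; whitelisting the NQN makes the connect succeed, removing it restores the rejection, and re-enabling allow_any_host opens the subsystem again. The same flow against scripts/rpc.py, with RPC names and NQNs as they appear in the trace:

rpc=./scripts/rpc.py
subnqn=nqn.2016-06.io.spdk:cnode1
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
$rpc nvmf_subsystem_allow_any_host -d "$subnqn"       # deny by default: connect must fail
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn"     # whitelist this host: connect succeeds
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"  # back to rejected
$rpc nvmf_subsystem_allow_any_host -e "$subnqn"       # open to every host again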
08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:04.715 [2024-05-15 08:46:59.305588] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.715 08:46:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:05.282 08:46:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:21:05.282 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:21:05.282 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:21:05.282 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:21:05.282 08:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:21:07.179 08:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:21:07.179 08:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:07.179 08:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:21:07.179 08:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:21:07.179 08:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:21:07.179 08:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:21:07.179 08:47:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:07.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:21:07.437 08:47:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:07.437 08:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:21:07.437 08:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:21:07.437 08:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:07.437 08:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:21:07.437 08:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:07.437 08:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:21:07.437 08:47:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:21:07.437 08:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.437 08:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:07.437 [2024-05-15 08:47:02.022822] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.437 08:47:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:08.003 08:47:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:21:08.003 08:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:21:08.003 08:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:21:08.003 08:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:21:08.003 08:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:21:09.898 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:21:09.898 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:09.898 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:21:09.898 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:21:09.898 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:21:09.898 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:21:09.898 08:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:10.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:10.155 
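[Annotation] Each connect in the loop above is gated by waitforserial, and each disconnect by waitforserial_disconnect; both poll lsblk for the subsystem serial until the namespace appears or disappears. A condensed sketch of the appearance side (the suite's version also takes an expected device count):

waitforserial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        sleep 2                       # the trace shows the same 2 s cadence
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
    done
    return 1                          # namespace never showed up
}
waitforserial SPDKISFASTANDAWESOME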
08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:10.155 [2024-05-15 08:47:04.787181] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:10.155 08:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:10.718 08:47:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:21:10.718 08:47:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:21:10.718 08:47:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:21:10.718 08:47:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:21:10.718 08:47:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:21:12.615 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:21:12.615 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:12.615 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:21:12.615 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:21:12.615 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:21:12.615 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:21:12.615 08:47:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:12.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:12.873 [2024-05-15 08:47:07.470267] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.873 08:47:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:13.438 08:47:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:21:13.438 08:47:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:21:13.438 08:47:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:21:13.438 08:47:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:21:13.438 08:47:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:21:15.334 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:21:15.334 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:15.334 08:47:10 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:15.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:15.593 [2024-05-15 08:47:10.267763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:15.593 08:47:10 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:15.593 08:47:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:16.158 08:47:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:21:16.158 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:21:16.158 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:21:16.158 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:21:16.158 08:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:21:18.713 08:47:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:21:18.713 08:47:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:18.713 08:47:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:21:18.713 08:47:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:21:18.713 08:47:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:21:18.713 08:47:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:21:18.713 08:47:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:18.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:18.713 08:47:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:18.713 08:47:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:21:18.713 08:47:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:21:18.713 08:47:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:18.713 08:47:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:21:18.713 08:47:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # 
for i in $(seq 1 $loops) 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.713 [2024-05-15 08:47:13.039250] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.713 [2024-05-15 08:47:13.087284] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.713 [2024-05-15 08:47:13.135431] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.713 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
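Note: the trace above is one pass of the target/rpc.sh@99 loop, which creates and tears down the same subsystem five times to exercise the RPC path; the passes below repeat it verbatim. A minimal standalone sketch of one cycle, with every RPC name and argument taken from the trace (assumptions: rpc.py on PATH, the nvmf target already running, a Malloc1 bdev already created):

    # One cycle of the subsystem create/teardown loop traced above.
    # Assumptions: rpc.py on PATH, nvmf target running, Malloc1 bdev exists.
    loops=5
    nqn=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 "$loops"); do
        rpc.py nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
        rpc.py nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        rpc.py nvmf_subsystem_add_ns "$nqn" Malloc1
        rpc.py nvmf_subsystem_allow_any_host "$nqn"
        rpc.py nvmf_subsystem_remove_ns "$nqn" 1
        rpc.py nvmf_delete_subsystem "$nqn"
    done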
00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.714 [2024-05-15 08:47:13.183630] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:21:18.714 08:47:13 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.714 [2024-05-15 08:47:13.231805] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:21:18.714 "tick_rate": 2700000000, 00:21:18.714 "poll_groups": [ 00:21:18.714 { 00:21:18.714 "name": "nvmf_tgt_poll_group_000", 00:21:18.714 "admin_qpairs": 2, 00:21:18.714 "io_qpairs": 84, 00:21:18.714 "current_admin_qpairs": 0, 00:21:18.714 "current_io_qpairs": 0, 00:21:18.714 "pending_bdev_io": 0, 00:21:18.714 "completed_nvme_io": 185, 00:21:18.714 "transports": [ 00:21:18.714 { 00:21:18.714 "trtype": "TCP" 00:21:18.714 } 00:21:18.714 ] 00:21:18.714 }, 00:21:18.714 { 00:21:18.714 "name": "nvmf_tgt_poll_group_001", 00:21:18.714 "admin_qpairs": 2, 00:21:18.714 "io_qpairs": 
84, 00:21:18.714 "current_admin_qpairs": 0, 00:21:18.714 "current_io_qpairs": 0, 00:21:18.714 "pending_bdev_io": 0, 00:21:18.714 "completed_nvme_io": 138, 00:21:18.714 "transports": [ 00:21:18.714 { 00:21:18.714 "trtype": "TCP" 00:21:18.714 } 00:21:18.714 ] 00:21:18.714 }, 00:21:18.714 { 00:21:18.714 "name": "nvmf_tgt_poll_group_002", 00:21:18.714 "admin_qpairs": 1, 00:21:18.714 "io_qpairs": 84, 00:21:18.714 "current_admin_qpairs": 0, 00:21:18.714 "current_io_qpairs": 0, 00:21:18.714 "pending_bdev_io": 0, 00:21:18.714 "completed_nvme_io": 216, 00:21:18.714 "transports": [ 00:21:18.714 { 00:21:18.714 "trtype": "TCP" 00:21:18.714 } 00:21:18.714 ] 00:21:18.714 }, 00:21:18.714 { 00:21:18.714 "name": "nvmf_tgt_poll_group_003", 00:21:18.714 "admin_qpairs": 2, 00:21:18.714 "io_qpairs": 84, 00:21:18.714 "current_admin_qpairs": 0, 00:21:18.714 "current_io_qpairs": 0, 00:21:18.714 "pending_bdev_io": 0, 00:21:18.714 "completed_nvme_io": 147, 00:21:18.714 "transports": [ 00:21:18.714 { 00:21:18.714 "trtype": "TCP" 00:21:18.714 } 00:21:18.714 ] 00:21:18.714 } 00:21:18.714 ] 00:21:18.714 }' 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:21:18.714 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:18.715 rmmod nvme_tcp 00:21:18.715 rmmod nvme_fabrics 00:21:18.715 rmmod nvme_keyring 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2203139 ']' 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2203139 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@947 -- # '[' -z 2203139 ']' 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # kill -0 2203139 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@952 -- # uname 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2203139 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2203139' 00:21:18.715 killing process with pid 2203139 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # kill 2203139 00:21:18.715 [2024-05-15 08:47:13.462798] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:18.715 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@971 -- # wait 2203139 00:21:18.973 08:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:18.973 08:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:18.973 08:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:18.973 08:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:18.973 08:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:18.973 08:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.973 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:18.973 08:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.508 08:47:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:21.508 00:21:21.508 real 0m25.200s 00:21:21.508 user 1m20.334s 00:21:21.508 sys 0m4.176s 00:21:21.508 08:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:21.508 08:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:21.508 ************************************ 00:21:21.508 END TEST nvmf_rpc 00:21:21.508 ************************************ 00:21:21.508 08:47:15 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:21:21.508 08:47:15 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:21.508 08:47:15 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:21.508 08:47:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:21.508 ************************************ 00:21:21.508 START TEST nvmf_invalid 00:21:21.508 ************************************ 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:21:21.508 * Looking for test storage... 
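Note: the qpair totals validated at the end of nvmf_rpc above, (( 7 > 0 )) and (( 336 > 0 )), come from the jsum helper traced at target/rpc.sh@19-20: a jq filter applied to the nvmf_get_stats output, summed with awk. A minimal sketch, assuming $stats holds the JSON string captured at target/rpc.sh@110:

    # jsum: sum a numeric jq filter over the captured nvmf_get_stats JSON.
    # Assumption: $stats holds the JSON saved at target/rpc.sh@110.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    # With the stats above: admin_qpairs 2+2+1+2 = 7, io_qpairs 4*84 = 336.
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))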
00:21:21.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:21:21.508 08:47:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:24.040 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:24.040 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:24.040 Found net devices under 0000:09:00.0: cvl_0_0 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:24.040 Found net devices under 0000:09:00.1: cvl_0_1 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.040 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:24.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:24.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:21:24.041 00:21:24.041 --- 10.0.0.2 ping statistics --- 00:21:24.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.041 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:24.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:21:24.041 00:21:24.041 --- 10.0.0.1 ping statistics --- 00:21:24.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.041 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2207928 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2207928 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@828 -- # '[' -z 2207928 ']' 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:24.041 08:47:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:21:24.041 [2024-05-15 08:47:18.609133] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
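Note: the two pings above confirm the topology that nvmf_tcp_init (nvmf/common.sh@229-268) built for this run: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, while the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, with TCP/4420 opened between them. Condensed from the exact commands in the trace:

    # Condensed from nvmf/common.sh as traced above: isolate the target port
    # in its own namespace and open TCP/4420 from the initiator side.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator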
00:21:24.041 [2024-05-15 08:47:18.609239] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.041 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.041 [2024-05-15 08:47:18.691473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:24.041 [2024-05-15 08:47:18.783049] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.041 [2024-05-15 08:47:18.783110] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.041 [2024-05-15 08:47:18.783136] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.041 [2024-05-15 08:47:18.783158] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.041 [2024-05-15 08:47:18.783170] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:24.041 [2024-05-15 08:47:18.783282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.041 [2024-05-15 08:47:18.783336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.041 [2024-05-15 08:47:18.783391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:24.041 [2024-05-15 08:47:18.783394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.975 08:47:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:24.975 08:47:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@861 -- # return 0 00:21:24.975 08:47:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:24.975 08:47:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:24.975 08:47:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:21:24.975 08:47:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.975 08:47:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:24.975 08:47:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode19380 00:21:25.233 [2024-05-15 08:47:19.785853] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:21:25.233 08:47:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:21:25.233 { 00:21:25.233 "nqn": "nqn.2016-06.io.spdk:cnode19380", 00:21:25.233 "tgt_name": "foobar", 00:21:25.233 "method": "nvmf_create_subsystem", 00:21:25.233 "req_id": 1 00:21:25.233 } 00:21:25.233 Got JSON-RPC error response 00:21:25.233 response: 00:21:25.233 { 00:21:25.233 "code": -32603, 00:21:25.233 "message": "Unable to find target foobar" 00:21:25.233 }' 00:21:25.233 08:47:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:21:25.233 { 00:21:25.233 "nqn": "nqn.2016-06.io.spdk:cnode19380", 00:21:25.233 "tgt_name": "foobar", 00:21:25.233 "method": "nvmf_create_subsystem", 00:21:25.233 "req_id": 1 00:21:25.233 } 00:21:25.233 Got JSON-RPC error response 00:21:25.233 response: 00:21:25.233 { 00:21:25.233 "code": -32603, 00:21:25.233 "message": "Unable to find target foobar" 00:21:25.233 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:21:25.233 08:47:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:21:25.233 08:47:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode31129 00:21:25.492 [2024-05-15 08:47:20.030797] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31129: invalid serial number 'SPDKISFASTANDAWESOME' 00:21:25.492 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:21:25.492 { 00:21:25.492 "nqn": "nqn.2016-06.io.spdk:cnode31129", 00:21:25.492 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:21:25.492 "method": "nvmf_create_subsystem", 00:21:25.492 "req_id": 1 00:21:25.492 } 00:21:25.492 Got JSON-RPC error response 00:21:25.492 response: 00:21:25.492 { 00:21:25.492 "code": -32602, 00:21:25.492 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:21:25.492 }' 00:21:25.492 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:21:25.492 { 00:21:25.492 "nqn": "nqn.2016-06.io.spdk:cnode31129", 00:21:25.492 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:21:25.492 "method": "nvmf_create_subsystem", 00:21:25.492 "req_id": 1 00:21:25.492 } 00:21:25.492 Got JSON-RPC error response 00:21:25.492 response: 00:21:25.492 { 00:21:25.492 "code": -32602, 00:21:25.492 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:21:25.492 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:21:25.492 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:21:25.492 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode9572 00:21:25.750 [2024-05-15 08:47:20.295632] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9572: invalid model number 'SPDK_Controller' 00:21:25.750 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:21:25.750 { 00:21:25.750 "nqn": "nqn.2016-06.io.spdk:cnode9572", 00:21:25.750 "model_number": "SPDK_Controller\u001f", 00:21:25.750 "method": "nvmf_create_subsystem", 00:21:25.750 "req_id": 1 00:21:25.750 } 00:21:25.750 Got JSON-RPC error response 00:21:25.750 response: 00:21:25.750 { 00:21:25.750 "code": -32602, 00:21:25.750 "message": "Invalid MN SPDK_Controller\u001f" 00:21:25.750 }' 00:21:25.750 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:21:25.750 { 00:21:25.750 "nqn": "nqn.2016-06.io.spdk:cnode9572", 00:21:25.750 "model_number": "SPDK_Controller\u001f", 00:21:25.750 "method": "nvmf_create_subsystem", 00:21:25.750 "req_id": 1 00:21:25.750 } 00:21:25.750 Got JSON-RPC error response 00:21:25.750 response: 00:21:25.751 { 00:21:25.751 "code": -32602, 00:21:25.751 "message": "Invalid MN SPDK_Controller\u001f" 00:21:25.751 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' 
'91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 38 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:21:25.751 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:21:25.752 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:25.752 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:25.752 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ t == \- ]] 00:21:25.752 08:47:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'tO*U2ocyX&7_Pwv&/ /dev/null' 00:21:28.599 08:47:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.131 08:47:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:31.131 00:21:31.131 real 0m9.617s 00:21:31.131 user 0m22.183s 00:21:31.131 sys 0m2.822s 00:21:31.131 08:47:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:31.131 08:47:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:21:31.131 ************************************ 00:21:31.131 END TEST nvmf_invalid 00:21:31.131 ************************************ 00:21:31.131 08:47:25 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:21:31.131 08:47:25 nvmf_tcp -- 
common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:31.131 08:47:25 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:31.131 08:47:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:31.131 ************************************ 00:21:31.131 START TEST nvmf_abort 00:21:31.131 ************************************ 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:21:31.131 * Looking for test storage... 00:21:31.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.131 08:47:25 
nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:31.131 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:31.132 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.132 08:47:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.132 08:47:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.132 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:31.132 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:31.132 08:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:21:31.132 08:47:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:33.672 08:47:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:33.673 08:47:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:21:33.673 08:47:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:33.673 08:47:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:33.673 08:47:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:33.673 08:47:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:33.673 08:47:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:33.673 08:47:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:21:33.673 08:47:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:33.673 08:47:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:21:33.673 08:47:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:21:33.673 08:47:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:21:33.673 08:47:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:21:33.673 08:47:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:21:33.673 08:47:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:21:33.673 08:47:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.673 08:47:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 
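The trace above is nvmf/common.sh sorting the host's NICs into per-family pools (e810, x722, mlx) by PCI vendor:device ID, then keeping only the pool that matches the requested transport. A condensed, runnable sketch of that step follows; the pci_bus_cache entry and the model names in the comments are illustrative annotations, not values captured from this run.

  #!/usr/bin/env bash
  # Sketch of gather_supported_nvmf_pci_devs (abridged). In the real script
  # pci_bus_cache is built by scanning /sys/bus/pci/devices; one hypothetical
  # entry is hard-coded here so the sketch is self-contained.
  declare -A pci_bus_cache=( ["0x8086:0x159b"]="0000:09:00.0 0000:09:00.1" )
  intel=0x8086 mellanox=0x15b3
  e810=() x722=() mlx=()
  e810+=(${pci_bus_cache["$intel:0x1592"]})   # E810-C; key absent -> adds nothing
  e810+=(${pci_bus_cache["$intel:0x159b"]})   # E810-XXV; word-splits into both ports
  x722+=(${pci_bus_cache["$intel:0x37d2"]})
  mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) # ConnectX-5, only relevant to rdma runs
  pci_devs=("${e810[@]}")                     # tcp on an e810 pool -> keep e810 NICs only
  (( ${#pci_devs[@]} == 0 )) && { echo 'no supported NICs found' >&2; exit 1; }
  for pci in "${pci_devs[@]}"; do echo "Found $pci"; done

This is why the run below reports exactly two devices (0000:09:00.0 and 0000:09:00.1, both 0x8086:0x159b) and then resolves them to the cvl_0_0/cvl_0_1 net interfaces under /sys/bus/pci/devices/$pci/net/.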
00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:33.673 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:33.673 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:33.673 Found net devices under 0000:09:00.0: cvl_0_0 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:33.673 Found net devices under 0000:09:00.1: cvl_0_1 00:21:33.673 08:47:28 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:33.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:33.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:21:33.673 00:21:33.673 --- 10.0.0.2 ping statistics --- 00:21:33.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.673 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:33.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:33.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:21:33.673 00:21:33.673 --- 10.0.0.1 ping statistics --- 00:21:33.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.673 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2210863 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2210863 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@828 -- # '[' -z 2210863 ']' 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:33.673 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:33.674 [2024-05-15 08:47:28.207482] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:21:33.674 [2024-05-15 08:47:28.207566] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.674 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.674 [2024-05-15 08:47:28.293561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:33.674 [2024-05-15 08:47:28.377007] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.674 [2024-05-15 08:47:28.377067] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:33.674 [2024-05-15 08:47:28.377095] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.674 [2024-05-15 08:47:28.377106] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.674 [2024-05-15 08:47:28.377116] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:33.674 [2024-05-15 08:47:28.377201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.674 [2024-05-15 08:47:28.377265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:33.674 [2024-05-15 08:47:28.377269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@861 -- # return 0 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:33.932 [2024-05-15 08:47:28.517881] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:33.932 Malloc0 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:33.932 Delay0 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.932 08:47:28 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:33.932 [2024-05-15 08:47:28.593910] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:33.932 [2024-05-15 08:47:28.594258] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.932 08:47:28 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:21:33.932 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.932 [2024-05-15 08:47:28.699317] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:21:36.510 Initializing NVMe Controllers 00:21:36.510 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:21:36.510 controller IO queue size 128 less than required 00:21:36.510 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:21:36.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:21:36.510 Initialization complete. Launching workers. 
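Before the abort counters are printed below, here is the target-side configuration the trace just applied, condensed into one sequence for readability. This is a sketch using the same scripts/rpc.py arguments as the rpc_cmd calls above, not a substitute for abort.sh; bdev_delay takes its latencies in microseconds, so these values inject roughly a second of delay per I/O, which is what keeps requests in flight long enough for the abort workload to catch them.

  # Condensed replay of the setup above; assumes a live nvmf_tgt is listening
  # on the default RPC socket.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256          # TCP transport (flags as in the trace)
  $rpc bdev_malloc_create 64 4096 -b Malloc0                   # 64 MiB RAM disk, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000             # avg/p99 read+write latency, in us
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The initiator side is the abort example launched above with -q 128 against the 10.0.0.2:4420 listener; since the controller's I/O queue is smaller than the requested depth, excess requests queue at the NVMe driver, exactly as the banner warns, and the counters that follow show most I/O being aborted rather than completed.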
00:21:36.510 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34205 00:21:36.510 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34266, failed to submit 62 00:21:36.510 success 34209, unsuccess 57, failed 0 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:36.510 rmmod nvme_tcp 00:21:36.510 rmmod nvme_fabrics 00:21:36.510 rmmod nvme_keyring 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2210863 ']' 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2210863 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@947 -- # '[' -z 2210863 ']' 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # kill -0 2210863 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # uname 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2210863 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2210863' 00:21:36.510 killing process with pid 2210863 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # kill 2210863 00:21:36.510 [2024-05-15 08:47:30.901642] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:36.510 08:47:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@971 -- # wait 2210863 00:21:36.510 08:47:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:36.510 08:47:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:36.510 08:47:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:36.510 08:47:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:36.510 
08:47:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:36.510 08:47:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.510 08:47:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.510 08:47:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.045 08:47:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:39.045 00:21:39.045 real 0m7.741s 00:21:39.045 user 0m10.795s 00:21:39.045 sys 0m2.841s 00:21:39.045 08:47:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:39.045 08:47:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:39.045 ************************************ 00:21:39.045 END TEST nvmf_abort 00:21:39.045 ************************************ 00:21:39.045 08:47:33 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:21:39.045 08:47:33 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:39.045 08:47:33 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:39.045 08:47:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:39.045 ************************************ 00:21:39.045 START TEST nvmf_ns_hotplug_stress 00:21:39.045 ************************************ 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:21:39.045 * Looking for test storage... 00:21:39.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.045 
08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.045 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:39.046 
08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:21:39.046 08:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:21:41.578 08:47:35 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:41.578 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:41.578 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.578 
08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:41.578 Found net devices under 0000:09:00.0: cvl_0_0 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:41.578 Found net devices under 0000:09:00.1: cvl_0_1 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:41.578 
08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:41.578 08:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:41.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:41.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:21:41.578 00:21:41.578 --- 10.0.0.2 ping statistics --- 00:21:41.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.578 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:41.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:41.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:21:41.578 00:21:41.578 --- 10.0.0.1 ping statistics --- 00:21:41.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.578 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2213495 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2213495 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@828 -- # '[' -z 2213495 ']' 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:41.578 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:21:41.578 [2024-05-15 08:47:36.074929] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:21:41.578 [2024-05-15 08:47:36.075015] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.578 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.578 [2024-05-15 08:47:36.152689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:41.578 [2024-05-15 08:47:36.244005] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:41.578 [2024-05-15 08:47:36.244059] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.578 [2024-05-15 08:47:36.244077] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.578 [2024-05-15 08:47:36.244090] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.578 [2024-05-15 08:47:36.244113] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.578 [2024-05-15 08:47:36.244194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.578 [2024-05-15 08:47:36.244245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.579 [2024-05-15 08:47:36.244249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.579 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:41.579 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@861 -- # return 0 00:21:41.579 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:41.579 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:41.579 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:21:41.836 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.836 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:21:41.836 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:41.836 [2024-05-15 08:47:36.614287] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.094 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:42.352 08:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:42.609 [2024-05-15 08:47:37.205479] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:42.609 [2024-05-15 08:47:37.205758] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.609 08:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:42.867 08:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:21:43.124 Malloc0 00:21:43.124 08:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:21:43.381 Delay0 00:21:43.381 08:47:37 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:43.639 08:47:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:21:43.895 NULL1 00:21:43.895 08:47:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:21:44.153 08:47:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2213906 00:21:44.153 08:47:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:21:44.153 08:47:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:21:44.153 08:47:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:44.153 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.085 Read completed with error (sct=0, sc=11) 00:21:45.343 08:47:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:45.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:21:45.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:21:45.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:21:45.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:21:45.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:21:45.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:21:45.601 08:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:21:45.601 08:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:21:45.601 true 00:21:45.601 08:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:21:45.601 08:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:46.533 08:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:46.791 08:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:21:46.791 08:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:21:47.048 true 00:21:47.048 08:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:21:47.048 08:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
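From here until the 30-second perf run exits, the sh@44-50 xtrace lines repeat; reconstructed as a loop, the hot-plug stress looks like this (a sketch using the literal values from the log, not the script verbatim):

  # reader: 512 B random reads at queue depth 128 over NVMe/TCP for 30 s, on core 0
  ./build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID"; do                                      # sh@44: loop while perf is alive
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # sh@45: yank nsid 1 under load
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # sh@46: re-attach the delay bdev
      null_size=$((null_size + 1))                                   # sh@49
      rpc.py bdev_null_resize NULL1 "$null_size"                     # sh@50: grow NULL1 by 1 MB per pass
  done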
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:47.305 08:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:47.564 08:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:21:47.564 08:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:21:47.821 true 00:21:47.821 08:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:21:47.821 08:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:48.078 08:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:48.335 08:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:21:48.335 08:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:21:48.591 true 00:21:48.591 08:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:21:48.591 08:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:49.521 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:21:49.521 08:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:49.778 08:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:21:49.778 08:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:21:50.036 true 00:21:50.036 08:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:21:50.036 08:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:50.294 08:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:50.552 08:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:21:50.552 08:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:21:50.810 true 00:21:50.810 08:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:21:50.810 08:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:51.765 08:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:51.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:21:51.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:21:51.765 08:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:21:51.765 08:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:21:52.022 true 00:21:52.280 08:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:21:52.280 08:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:52.280 08:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:52.538 08:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:21:52.538 08:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:21:52.796 true 00:21:53.054 08:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:21:53.054 08:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:53.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:21:53.619 08:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:54.185 08:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:21:54.185 08:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:21:54.185 true 00:21:54.185 08:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:21:54.185 08:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:54.442 08:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:54.700 08:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:21:54.700 08:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:21:54.957 true 00:21:54.957 08:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:21:54.957 08:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- 
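The recurring perf error line decodes as an NVMe completion status: sct=0 is the generic status code type and sc=11 (decimal, i.e. 0x0b) is Invalid Namespace or Format, which is exactly what in-flight reads see while nsid 1 is detached. Judging by the output, -Q 1000 lets the run continue past I/O errors and suppresses all but one in every 1000 such messages, so each "Message suppressed 999 times" prefix stands for roughly a thousand failed reads. To tally them from a saved copy of the console output (build.log is a hypothetical file name):

  grep -c 'Read completed with error (sct=0, sc=11)' build.log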
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:55.891 08:47:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:55.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:21:55.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:21:56.149 08:47:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:21:56.149 08:47:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:21:56.406 true 00:21:56.406 08:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:21:56.406 08:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:56.664 08:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:56.921 08:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:21:56.921 08:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:21:57.179 true 00:21:57.179 08:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:21:57.179 08:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:57.436 08:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:57.694 08:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:21:57.694 08:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:21:57.951 true 00:21:57.951 08:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:21:57.951 08:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:58.885 08:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:58.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:21:59.142 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:21:59.142 08:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:21:59.142 08:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:21:59.399 
true 00:21:59.399 08:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:21:59.399 08:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:59.657 08:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:59.914 08:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:21:59.914 08:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:22:00.172 true 00:22:00.172 08:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:22:00.172 08:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:01.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:01.103 08:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:01.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:01.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:01.359 08:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:22:01.359 08:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:22:01.616 true 00:22:01.616 08:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:22:01.616 08:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:01.874 08:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:02.131 08:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:22:02.131 08:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:22:02.390 true 00:22:02.390 08:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:22:02.390 08:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:03.323 08:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:03.580 08:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:22:03.580 08:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:22:03.838 true 00:22:03.838 08:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:22:03.838 08:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:04.097 08:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:04.392 08:47:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:22:04.392 08:47:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:22:04.673 true 00:22:04.673 08:47:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:22:04.673 08:47:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:05.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:05.605 08:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:05.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:05.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:05.862 08:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:22:05.862 08:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:22:05.862 true 00:22:05.862 08:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:22:06.120 08:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:06.120 08:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:06.378 08:48:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:22:06.378 08:48:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:22:06.636 true 00:22:06.636 08:48:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:22:06.636 08:48:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:07.568 08:48:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:07.568 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:07.826 08:48:02 
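NULL1 started life as bdev_null_create NULL1 1000 512 (1000 MB, 512-byte blocks), and each pass of the loop grows it by 1 MB, so by this point it has reached 1021 MB. The resize exercises the namespace-resize path while the bdev stays attached as nsid 2; the size argument is in MB, and the bare "true" lines are the RPC's return value:

  rpc.py bdev_null_resize NULL1 1021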
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:22:07.826 08:48:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:22:08.084 true 00:22:08.084 08:48:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:22:08.084 08:48:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:08.342 08:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:08.597 08:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:22:08.597 08:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:22:08.854 true 00:22:08.854 08:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:22:08.854 08:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:09.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:09.786 08:48:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:10.044 08:48:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:22:10.044 08:48:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:22:10.301 true 00:22:10.301 08:48:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:22:10.302 08:48:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:10.560 08:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:10.818 08:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:22:10.818 08:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:22:11.075 true 00:22:11.075 08:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:22:11.075 08:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:12.008 08:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:12.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:12.008 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:12.265 08:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:22:12.265 08:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:22:12.265 true 00:22:12.523 08:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:22:12.523 08:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:12.523 08:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:13.089 08:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:22:13.089 08:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:22:13.089 true 00:22:13.089 08:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:22:13.089 08:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:14.023 08:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:14.281 08:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:22:14.281 08:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:22:14.281 Initializing NVMe Controllers 00:22:14.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:14.281 Controller IO queue size 128, less than required. 00:22:14.281 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:14.281 Controller IO queue size 128, less than required. 00:22:14.281 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:14.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:14.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:14.281 Initialization complete. Launching workers. 
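Perf warns (once per attached namespace, by the look of it) that a queue depth of 128 fills the controller's 128-entry I/O queue, so some requests wait inside the driver; the run proceeds regardless. In the summary table below, nsid 2 (NULL1) sustained about 11.2k IOPS, while nsid 1 (Delay0, repeatedly detached and re-attached) completed only about 871 IOPS, and its ~1.01 s max latency is consistent with the 1,000,000 us configured on the delay bdev. A throwaway way to pull the per-device rows out of a saved log (perf.log is a hypothetical file name; the last five fields are IOPS, MiB/s, and average/min/max latency in microseconds):

  awk '/NSID [0-9]+ from core/ {print $(NF-4), $(NF-3), $(NF-2)}' perf.log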
00:22:14.281 ========================================================
00:22:14.281 Latency(us)
00:22:14.281 Device Information : IOPS MiB/s Average min max
00:22:14.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 871.25 0.43 77356.56 2484.99 1012467.31
00:22:14.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11169.04 5.45 11461.44 3477.38 450966.07
00:22:14.281 ========================================================
00:22:14.281 Total : 12040.28 5.88 16229.68 2484.99 1012467.31
00:22:14.539 true 00:22:14.539 08:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2213906 00:22:14.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2213906) - No such process 00:22:14.539 08:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2213906 00:22:14.539 08:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:14.797 08:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:15.053 08:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:22:15.053 08:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:22:15.053 08:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:22:15.053 08:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:15.053 08:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:22:15.310 null0 00:22:15.310 08:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:15.310 08:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:15.310 08:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:22:15.567 null1 00:22:15.567 08:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:15.567 08:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:15.567 08:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:22:15.824 null2 00:22:15.824 08:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:15.824 08:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:15.824 08:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:22:16.081 null3 00:22:16.081 08:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:16.081 08:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:16.081 08:48:10 nvmf_tcp.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:22:16.339 null4 00:22:16.339 08:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:16.339 08:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:16.339 08:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:22:16.595 null5 00:22:16.595 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:16.595 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:16.595 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:22:16.852 null6 00:22:16.852 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:16.852 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:16.852 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:22:17.109 null7 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:17.109 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
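From sh@58 onward the test switches to eight concurrent add/remove workers, one per null bdev. Pieced together from the sh@58-66 launch lines here and the sh@14-18 lines each worker traces, the shape is (a sketch, not the script verbatim):

  add_remove() {                                       # sh@14-18
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }
  nthreads=8; pids=()                                  # sh@58
  for ((i = 0; i < nthreads; i++)); do                 # sh@59-60
      rpc.py bdev_null_create "null$i" 100 4096        # 100 MB, 4 KiB blocks
  done
  for ((i = 0; i < nthreads; i++)); do                 # sh@62-64
      add_remove $((i + 1)) "null$i" &                 # worker i churns nsid i+1
      pids+=($!)
  done
  wait "${pids[@]}"                                    # sh@66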
00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2218443 2218444 2218446 2218448 2218450 2218452 2218454 2218456 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:17.110 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:17.368 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:17.368 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:17.368 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:17.368 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:17.368 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:17.368 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:17.368 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:17.368 08:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:17.627 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:17.920 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:17.920 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:17.920 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:17.920 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:17.920 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:17.920 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:17.920 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:17.920 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:18.177 08:48:12 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:18.177 08:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:18.435 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:18.435 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:18.435 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:18.435 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:18.435 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:18.435 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:18.435 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:18.435 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:18.693 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:18.950 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:18.950 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:18.950 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:18.950 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:18.950 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:18.950 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:18.950 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:18.950 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:19.207 08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:19.207 
08:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:19.465 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:19.465 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:19.465 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:19.465 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:19.465 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:19.465 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:19.465 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:19.465 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:19.723 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:19.982 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:19.982 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:19.982 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:19.982 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:19.982 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:19.982 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:19.982 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:19.982 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:20.239 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:22:20.239 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:20.239 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:20.239 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:20.239 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:20.240 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:20.240 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:20.240 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:20.240 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:20.240 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:20.240 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:20.240 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:20.240 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:20.240 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:20.240 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:20.240 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:20.240 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:20.240 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:20.240 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:20.240 08:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:20.240 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:20.240 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:20.240 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:20.240 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:20.497 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:20.497 
08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:20.497 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:20.497 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:20.497 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:20.497 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:20.497 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:20.497 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:20.754 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:20.754 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:20.754 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:20.754 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:20.754 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:20.755 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:20.755 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:20.755 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:20.755 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:20.755 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:20.755 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:20.755 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:20.755 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:20.755 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:20.755 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:20.755 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:20.755 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:20.755 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:21.012 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:21.012 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:21.012 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:21.012 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:21.012 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:21.012 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:21.012 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:21.013 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:21.271 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:21.271 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:21.271 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:21.271 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:21.271 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:21.271 08:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:21.529 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:21.787 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:21.787 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:21.787 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:21.787 
08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:21.787 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:21.787 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:21.787 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:21.787 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:22.045 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:22.303 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:22.303 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:22.303 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:22.303 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:22.303 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:22.303 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:22.303 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:22.303 08:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:22.560 rmmod nvme_tcp 00:22:22.560 rmmod nvme_fabrics 00:22:22.560 rmmod nvme_keyring 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2213495 ']' 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2213495 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@947 -- # '[' -z 2213495 ']' 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # kill -0 2213495 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # uname 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2213495 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2213495' 00:22:22.560 killing process with pid 2213495 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # kill 2213495 00:22:22.560 [2024-05-15 08:48:17.281267] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:22.560 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # wait 2213495 00:22:22.819 08:48:17 
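With the ten rounds exhausted, the trap is cleared and nvmftestfini starts unloading the kernel modules and killing the target. The add/remove churn that filled the trace above reduces to a simple pattern: eight namespaces hot-added and hot-removed against cnode1, concurrently, ten rounds each. A minimal bash sketch of that pattern, reconstructed from the rpc.py calls and the @16-@18 xtrace markers; the eight-worker layout is an inference from the interleaving, not ns_hotplug_stress.sh verbatim:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {                                      # one worker per namespace ID
	local nsid=$1 bdev=$2 i
	for ((i = 0; i < 10; ++i)); do              # the "@16 (( ++i )) / (( i < 10 ))" pairs in the trace
		$rpc_py nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
		$rpc_py nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
	done
}

for n in {1..8}; do
	add_remove "$n" "null$((n - 1))" &          # nsid N is backed by bdev null(N-1), per the log
done
wait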
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:22.819 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:22.819 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:22.819 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:22.819 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:22.819 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.819 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.819 08:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.353 08:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:25.353 00:22:25.353 real 0m46.310s 00:22:25.353 user 3m29.495s 00:22:25.353 sys 0m15.945s 00:22:25.353 08:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:25.353 08:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:22:25.353 ************************************ 00:22:25.353 END TEST nvmf_ns_hotplug_stress 00:22:25.353 ************************************ 00:22:25.353 08:48:19 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:22:25.353 08:48:19 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:22:25.353 08:48:19 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:25.353 08:48:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:25.353 ************************************ 00:22:25.353 START TEST nvmf_connect_stress 00:22:25.353 ************************************ 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:22:25.353 * Looking for test storage... 
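Every suite in this log is wrapped the same way: the asterisk banners, the START TEST/END TEST lines, and the real/user/sys summary above all come from the run_test helper in autotest_common.sh. A hedged sketch of that wrapper, inferred from the banner and timing output rather than copied from the source:

run_test() {
	local name=$1; shift
	echo '************************************'
	echo "START TEST $name"
	echo '************************************'
	time "$@"                                   # emits the real/user/sys block on completion
	local rc=$?                                 # status of the timed command
	echo '************************************'
	echo "END TEST $name"
	echo '************************************'
	return $rc
}

This is how the suite that just ended was invoked, and how connect_stress was launched above: run_test nvmf_connect_stress .../test/nvmf/target/connect_stress.sh --transport=tcp.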
00:22:25.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.353 08:48:19 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:22:25.354 08:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:27.885 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:27.885 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:27.885 Found net devices under 0000:09:00.0: cvl_0_0 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.885 08:48:22 
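What the common.sh trace is doing through this stretch: the two Intel E810 functions (device ID 0x159b, found at 0000:09:00.0 and 0000:09:00.1) are mapped to their kernel net devices by walking sysfs. A hedged sketch of that mapping; the device-ID lookup is paraphrased, since the script consults a prebuilt pci_bus_cache rather than calling lspci:

intel=0x8086
e810=($(lspci -Dd "8086:159b" | awk '{print $1}'))       # assumption: lspci stands in for pci_bus_cache
net_devs=()
for pci in "${e810[@]}"; do
	pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../0000:09:00.0/net/cvl_0_0
	pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the path, keep just the interface name
	echo "Found net devices under $pci: ${pci_net_devs[*]}"
	net_devs+=("${pci_net_devs[@]}")
done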
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.885 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:27.886 Found net devices under 0000:09:00.1: cvl_0_1 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:27.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:27.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:22:27.886 00:22:27.886 --- 10.0.0.2 ping statistics --- 00:22:27.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.886 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:27.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:27.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:22:27.886 00:22:27.886 --- 10.0.0.1 ping statistics --- 00:22:27.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.886 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2221613 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2221613 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@828 -- # '[' -z 2221613 ']' 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:27.886 [2024-05-15 08:48:22.329272] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
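The two ping checks above close out nvmf_tcp_init: one E810 port has been moved into a network namespace to act as the target, the other stays in the root namespace as the initiator, and traffic is verified in both directions between 10.0.0.1 and 10.0.0.2. Condensed from the commands traced above (the interface names cvl_0_0 and cvl_0_1 are taken straight from the log):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

With connectivity proven, nvmfappstart launches the target inside that namespace, as the @480/@481 lines above show:

ip netns exec cvl_0_0_ns_spdk \
	/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!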
00:22:27.886 [2024-05-15 08:48:22.329340] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.886 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.886 [2024-05-15 08:48:22.402600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:27.886 [2024-05-15 08:48:22.482820] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.886 [2024-05-15 08:48:22.482870] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.886 [2024-05-15 08:48:22.482885] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.886 [2024-05-15 08:48:22.482898] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.886 [2024-05-15 08:48:22.482924] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:27.886 [2024-05-15 08:48:22.482993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.886 [2024-05-15 08:48:22.483051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:27.886 [2024-05-15 08:48:22.483054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@861 -- # return 0 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:27.886 [2024-05-15 08:48:22.621805] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:27.886 [2024-05-15 08:48:22.639056] nvmf_rpc.c: 615:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:27.886 [2024-05-15 08:48:22.646395] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:27.886 NULL1 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2221646 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:27.886 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:28.147 
08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:28.147 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:28.147 08:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:28.406 08:48:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:28.406 08:48:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:28.406 08:48:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:28.406 08:48:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:28.406 08:48:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:28.662 08:48:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:28.662 08:48:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:28.662 08:48:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:28.662 08:48:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:28.662 08:48:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:28.920 08:48:23 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:28.920 08:48:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:28.920 08:48:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:28.920 08:48:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:28.920 08:48:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:29.485 08:48:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:29.485 08:48:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:29.485 08:48:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:29.485 08:48:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:29.485 08:48:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:29.743 08:48:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:29.743 08:48:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:29.743 08:48:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:29.743 08:48:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:29.743 08:48:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:30.001 08:48:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:30.001 08:48:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:30.001 08:48:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:30.001 08:48:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:30.001 08:48:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:30.259 08:48:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:30.259 08:48:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:30.259 08:48:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:30.259 08:48:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:30.259 08:48:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:30.517 08:48:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:30.517 08:48:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:30.517 08:48:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:30.517 08:48:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:30.517 08:48:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:31.082 08:48:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:31.082 08:48:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:31.082 08:48:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:31.082 08:48:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:31.082 08:48:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:31.339 08:48:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:22:31.339 08:48:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:31.339 08:48:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:31.339 08:48:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:31.339 08:48:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:31.596 08:48:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:31.596 08:48:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:31.596 08:48:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:31.596 08:48:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:31.596 08:48:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:31.854 08:48:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:31.854 08:48:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:31.854 08:48:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:31.854 08:48:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:31.854 08:48:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:32.165 08:48:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:32.165 08:48:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:32.165 08:48:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:32.165 08:48:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:32.165 08:48:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:32.446 08:48:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:32.446 08:48:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:32.446 08:48:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:32.446 08:48:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:32.446 08:48:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:33.009 08:48:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:33.009 08:48:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:33.009 08:48:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:33.009 08:48:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:33.009 08:48:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:33.310 08:48:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:33.310 08:48:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:33.310 08:48:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:33.310 08:48:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:33.310 08:48:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:33.568 08:48:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:33.568 08:48:28 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:33.568 08:48:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:33.568 08:48:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:33.568 08:48:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:33.826 08:48:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:33.826 08:48:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:33.826 08:48:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:33.826 08:48:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:33.826 08:48:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:34.083 08:48:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:34.083 08:48:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:34.083 08:48:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:34.083 08:48:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:34.083 08:48:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:34.340 08:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:34.340 08:48:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:34.340 08:48:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:34.597 08:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:34.597 08:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:34.854 08:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:34.854 08:48:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:34.854 08:48:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:34.854 08:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:34.855 08:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:35.112 08:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:35.112 08:48:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:35.112 08:48:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:35.112 08:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:35.112 08:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:35.370 08:48:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:35.370 08:48:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:35.370 08:48:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:35.370 08:48:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:35.370 08:48:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:35.935 08:48:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:35.935 08:48:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 2221646 00:22:35.935 08:48:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:35.935 08:48:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:35.935 08:48:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:36.192 08:48:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:36.192 08:48:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:36.192 08:48:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:36.192 08:48:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:36.192 08:48:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:36.450 08:48:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:36.450 08:48:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:36.450 08:48:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:36.450 08:48:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:36.450 08:48:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:36.707 08:48:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:36.707 08:48:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:36.707 08:48:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:36.707 08:48:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:36.707 08:48:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:36.965 08:48:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:36.965 08:48:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:36.965 08:48:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:36.965 08:48:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:36.965 08:48:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:37.530 08:48:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:37.530 08:48:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:37.530 08:48:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:37.530 08:48:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:37.530 08:48:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:37.787 08:48:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:37.787 08:48:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:37.787 08:48:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:37.787 08:48:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:37.787 08:48:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:38.044 08:48:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.044 08:48:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:38.044 08:48:32 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:38.044 08:48:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.044 08:48:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:38.044 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:38.302 08:48:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.302 08:48:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2221646 00:22:38.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2221646) - No such process 00:22:38.302 08:48:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2221646 00:22:38.302 08:48:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:22:38.302 08:48:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:22:38.302 08:48:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:22:38.302 08:48:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:38.302 08:48:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:22:38.302 08:48:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:38.302 08:48:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:22:38.302 08:48:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:38.302 08:48:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:38.302 rmmod nvme_tcp 00:22:38.302 rmmod nvme_fabrics 00:22:38.302 rmmod nvme_keyring 00:22:38.302 08:48:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:38.302 08:48:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:22:38.302 08:48:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:22:38.302 08:48:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2221613 ']' 00:22:38.302 08:48:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2221613 00:22:38.302 08:48:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@947 -- # '[' -z 2221613 ']' 00:22:38.302 08:48:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # kill -0 2221613 00:22:38.302 08:48:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # uname 00:22:38.302 08:48:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:38.302 08:48:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2221613 00:22:38.302 08:48:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:38.302 08:48:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:38.302 08:48:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2221613' 00:22:38.302 killing process with pid 2221613 00:22:38.302 08:48:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # kill 2221613 00:22:38.302 [2024-05-15 08:48:33.067349] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled 
for removal in v24.09 hit 1 times 00:22:38.302 08:48:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@971 -- # wait 2221613 00:22:38.568 08:48:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:38.568 08:48:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:38.568 08:48:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:38.568 08:48:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:38.568 08:48:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:38.568 08:48:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.568 08:48:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.568 08:48:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.104 08:48:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:41.104 00:22:41.104 real 0m15.713s 00:22:41.104 user 0m38.579s 00:22:41.104 sys 0m6.141s 00:22:41.104 08:48:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:41.104 08:48:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:41.104 ************************************ 00:22:41.104 END TEST nvmf_connect_stress 00:22:41.104 ************************************ 00:22:41.104 08:48:35 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:22:41.104 08:48:35 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:22:41.104 08:48:35 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:41.104 08:48:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:41.104 ************************************ 00:22:41.104 START TEST nvmf_fused_ordering 00:22:41.104 ************************************ 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:22:41.104 * Looking for test storage... 
00:22:41.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.104 08:48:35 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:22:41.105 08:48:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:43.635 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:43.635 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.635 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:43.636 Found net devices under 0000:09:00.0: cvl_0_0 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:43.636 08:48:37 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:43.636 Found net devices under 0000:09:00.1: cvl_0_1 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:43.636 08:48:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:43.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:43.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:22:43.636 00:22:43.636 --- 10.0.0.2 ping statistics --- 00:22:43.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.636 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:43.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:22:43.636 00:22:43.636 --- 10.0.0.1 ping statistics --- 00:22:43.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.636 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2225087 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2225087 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@828 -- # '[' -z 2225087 ']' 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:43.636 [2024-05-15 08:48:38.102455] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
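For the fused-ordering run, common.sh repeats its NIC discovery from scratch: it keeps per-family lists of supported PCI device IDs (e810: 0x1592 and 0x159b, x722: 0x37d2, plus several Mellanox IDs), matches them against the PCI bus, and resolves every matching function to its kernel interface through /sys/bus/pci/devices/<addr>/net/ — on this runner both functions of the E810 NIC, 0000:09:00.0 and 0000:09:00.1, resolve to cvl_0_0 and cvl_0_1, and the [[ up == up ]] checks indicate that only link-up ports are accepted. A minimal stand-alone version of that lookup could look like the following (the lspci-based filtering is an illustrative assumption — the script walks its own prebuilt PCI cache — but the device ID and the sysfs path are as logged):

# Map every Intel E810 function (device ID 0x159b) to its kernel netdev name.
net_devs=()
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$dev" ] || continue       # function without a bound netdev: skip it
        net_devs+=("${dev##*/}")        # ${dev##*/} drops the sysfs path, keeping cvl_0_0 etc.
    done
done
echo "Found net devices: ${net_devs[*]}"  # -> cvl_0_0 cvl_0_1 on this runner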
00:22:43.636 [2024-05-15 08:48:38.102550] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.636 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.636 [2024-05-15 08:48:38.177457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.636 [2024-05-15 08:48:38.260323] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.636 [2024-05-15 08:48:38.260377] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.636 [2024-05-15 08:48:38.260405] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.636 [2024-05-15 08:48:38.260417] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.636 [2024-05-15 08:48:38.260427] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:43.636 [2024-05-15 08:48:38.260454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@861 -- # return 0 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:43.636 [2024-05-15 08:48:38.399023] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:43.636 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:43.636 [2024-05-15 08:48:38.414995] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:43.636 [2024-05-15 08:48:38.415326] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.637 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:43.637 08:48:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:22:43.637 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:43.637 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:43.895 NULL1 00:22:43.895 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:43.895 08:48:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:22:43.895 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:43.895 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:43.895 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:43.895 08:48:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:22:43.895 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:43.895 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:43.895 08:48:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:43.895 08:48:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:43.895 [2024-05-15 08:48:38.459143] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
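At this point the target is up with a single reactor (-m 0x2, core 1) and fused_ordering.sh has assembled the same minimal storage stack as the previous test: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a 1000 MiB null bdev NULL1 with 512-byte blocks as its namespace, and a listener on 10.0.0.2:4420. Spelled out through scripts/rpc.py rather than the test framework's rpc_cmd wrapper (the script path and the default /var/tmp/spdk.sock socket are assumptions consistent with the waitforlisten line above; all arguments are the ones logged):

# RPC sequence behind the fused_ordering.sh lines traced above.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192     # -u: 8192-byte in-capsule data size
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512             # 1000 MiB backing device, 512-byte blocks
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # exported as namespace 1

The decode_rpc_listen_address warning repeated here is the same deprecation counted at the previous test's shutdown: the wrapper still passes the legacy [listen_]address.transport field, slated for removal in v24.09 in favor of trtype. The fused_ordering tool then connects to cnode1 over TCP, and each fused_ordering(N) line that follows marks one completed iteration of its fused command submissions (in NVMe, a fused pair is a COMPARE immediately followed by a WRITE that the controller must execute atomically and in order).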
00:22:43.895 [2024-05-15 08:48:38.459183] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225227 ] 00:22:43.895 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.153 Attached to nqn.2016-06.io.spdk:cnode1 00:22:44.153 Namespace ID: 1 size: 1GB 00:22:44.153 fused_ordering(0) 00:22:44.153 fused_ordering(1) 00:22:44.153 fused_ordering(2) 00:22:44.153 fused_ordering(3) 00:22:44.153 fused_ordering(4) 00:22:44.153 fused_ordering(5) 00:22:44.153 fused_ordering(6) 00:22:44.153 fused_ordering(7) 00:22:44.153 fused_ordering(8) 00:22:44.153 fused_ordering(9) 00:22:44.153 fused_ordering(10) 00:22:44.153 fused_ordering(11) 00:22:44.153 fused_ordering(12) 00:22:44.153 fused_ordering(13) 00:22:44.153 fused_ordering(14) 00:22:44.153 fused_ordering(15) 00:22:44.153 fused_ordering(16) 00:22:44.153 fused_ordering(17) 00:22:44.153 fused_ordering(18) 00:22:44.153 fused_ordering(19) 00:22:44.153 fused_ordering(20) 00:22:44.153 fused_ordering(21) 00:22:44.153 fused_ordering(22) 00:22:44.153 fused_ordering(23) 00:22:44.153 fused_ordering(24) 00:22:44.153 fused_ordering(25) 00:22:44.153 fused_ordering(26) 00:22:44.153 fused_ordering(27) 00:22:44.153 fused_ordering(28) 00:22:44.153 fused_ordering(29) 00:22:44.153 fused_ordering(30) 00:22:44.153 fused_ordering(31) 00:22:44.153 fused_ordering(32) 00:22:44.153 fused_ordering(33) 00:22:44.153 fused_ordering(34) 00:22:44.153 fused_ordering(35) 00:22:44.153 fused_ordering(36) 00:22:44.153 fused_ordering(37) 00:22:44.153 fused_ordering(38) 00:22:44.153 fused_ordering(39) 00:22:44.153 fused_ordering(40) 00:22:44.153 fused_ordering(41) 00:22:44.153 fused_ordering(42) 00:22:44.153 fused_ordering(43) 00:22:44.153 fused_ordering(44) 00:22:44.153 fused_ordering(45) 00:22:44.153 fused_ordering(46) 00:22:44.153 fused_ordering(47) 00:22:44.153 fused_ordering(48) 00:22:44.153 fused_ordering(49) 00:22:44.153 fused_ordering(50) 00:22:44.153 fused_ordering(51) 00:22:44.153 fused_ordering(52) 00:22:44.153 fused_ordering(53) 00:22:44.153 fused_ordering(54) 00:22:44.153 fused_ordering(55) 00:22:44.153 fused_ordering(56) 00:22:44.153 fused_ordering(57) 00:22:44.153 fused_ordering(58) 00:22:44.153 fused_ordering(59) 00:22:44.153 fused_ordering(60) 00:22:44.153 fused_ordering(61) 00:22:44.153 fused_ordering(62) 00:22:44.153 fused_ordering(63) 00:22:44.153 fused_ordering(64) 00:22:44.153 fused_ordering(65) 00:22:44.153 fused_ordering(66) 00:22:44.153 fused_ordering(67) 00:22:44.153 fused_ordering(68) 00:22:44.153 fused_ordering(69) 00:22:44.153 fused_ordering(70) 00:22:44.153 fused_ordering(71) 00:22:44.153 fused_ordering(72) 00:22:44.153 fused_ordering(73) 00:22:44.153 fused_ordering(74) 00:22:44.153 fused_ordering(75) 00:22:44.153 fused_ordering(76) 00:22:44.153 fused_ordering(77) 00:22:44.153 fused_ordering(78) 00:22:44.153 fused_ordering(79) 00:22:44.153 fused_ordering(80) 00:22:44.153 fused_ordering(81) 00:22:44.153 fused_ordering(82) 00:22:44.153 fused_ordering(83) 00:22:44.153 fused_ordering(84) 00:22:44.153 fused_ordering(85) 00:22:44.153 fused_ordering(86) 00:22:44.153 fused_ordering(87) 00:22:44.153 fused_ordering(88) 00:22:44.153 fused_ordering(89) 00:22:44.153 fused_ordering(90) 00:22:44.153 fused_ordering(91) 00:22:44.153 fused_ordering(92) 00:22:44.153 fused_ordering(93) 00:22:44.153 fused_ordering(94) 00:22:44.153 fused_ordering(95) 00:22:44.153 fused_ordering(96) 00:22:44.153 
[fused_ordering(97) ... fused_ordering(956) elided: 860 per-iteration counter lines, identical apart from the index, with timestamps advancing from 00:22:44.153 through 00:22:46.422]
fused_ordering(957) 00:22:46.422 fused_ordering(958) 00:22:46.422 fused_ordering(959) 00:22:46.422 fused_ordering(960) 00:22:46.422 fused_ordering(961) 00:22:46.422 fused_ordering(962) 00:22:46.422 fused_ordering(963) 00:22:46.422 fused_ordering(964) 00:22:46.422 fused_ordering(965) 00:22:46.422 fused_ordering(966) 00:22:46.422 fused_ordering(967) 00:22:46.422 fused_ordering(968) 00:22:46.422 fused_ordering(969) 00:22:46.422 fused_ordering(970) 00:22:46.422 fused_ordering(971) 00:22:46.422 fused_ordering(972) 00:22:46.422 fused_ordering(973) 00:22:46.422 fused_ordering(974) 00:22:46.422 fused_ordering(975) 00:22:46.422 fused_ordering(976) 00:22:46.422 fused_ordering(977) 00:22:46.422 fused_ordering(978) 00:22:46.422 fused_ordering(979) 00:22:46.422 fused_ordering(980) 00:22:46.422 fused_ordering(981) 00:22:46.422 fused_ordering(982) 00:22:46.422 fused_ordering(983) 00:22:46.422 fused_ordering(984) 00:22:46.422 fused_ordering(985) 00:22:46.422 fused_ordering(986) 00:22:46.422 fused_ordering(987) 00:22:46.422 fused_ordering(988) 00:22:46.422 fused_ordering(989) 00:22:46.422 fused_ordering(990) 00:22:46.422 fused_ordering(991) 00:22:46.422 fused_ordering(992) 00:22:46.422 fused_ordering(993) 00:22:46.422 fused_ordering(994) 00:22:46.422 fused_ordering(995) 00:22:46.422 fused_ordering(996) 00:22:46.422 fused_ordering(997) 00:22:46.422 fused_ordering(998) 00:22:46.422 fused_ordering(999) 00:22:46.422 fused_ordering(1000) 00:22:46.422 fused_ordering(1001) 00:22:46.422 fused_ordering(1002) 00:22:46.422 fused_ordering(1003) 00:22:46.422 fused_ordering(1004) 00:22:46.422 fused_ordering(1005) 00:22:46.422 fused_ordering(1006) 00:22:46.422 fused_ordering(1007) 00:22:46.422 fused_ordering(1008) 00:22:46.422 fused_ordering(1009) 00:22:46.422 fused_ordering(1010) 00:22:46.422 fused_ordering(1011) 00:22:46.422 fused_ordering(1012) 00:22:46.422 fused_ordering(1013) 00:22:46.422 fused_ordering(1014) 00:22:46.422 fused_ordering(1015) 00:22:46.422 fused_ordering(1016) 00:22:46.422 fused_ordering(1017) 00:22:46.422 fused_ordering(1018) 00:22:46.422 fused_ordering(1019) 00:22:46.422 fused_ordering(1020) 00:22:46.422 fused_ordering(1021) 00:22:46.422 fused_ordering(1022) 00:22:46.422 fused_ordering(1023) 00:22:46.422 08:48:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:22:46.422 08:48:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:22:46.422 08:48:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:46.422 08:48:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:22:46.422 08:48:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:46.422 08:48:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:22:46.422 08:48:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:46.422 08:48:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:46.422 rmmod nvme_tcp 00:22:46.422 rmmod nvme_fabrics 00:22:46.422 rmmod nvme_keyring 00:22:46.680 08:48:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:46.680 08:48:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:22:46.680 08:48:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:22:46.680 08:48:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2225087 ']' 00:22:46.680 08:48:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2225087 
00:22:46.680 08:48:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@947 -- # '[' -z 2225087 ']' 00:22:46.680 08:48:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # kill -0 2225087 00:22:46.680 08:48:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # uname 00:22:46.680 08:48:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:46.680 08:48:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2225087 00:22:46.680 08:48:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:46.680 08:48:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:46.680 08:48:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2225087' 00:22:46.680 killing process with pid 2225087 00:22:46.680 08:48:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # kill 2225087 00:22:46.680 [2024-05-15 08:48:41.246411] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:46.680 08:48:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # wait 2225087 00:22:46.942 08:48:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:46.942 08:48:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:46.942 08:48:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:46.942 08:48:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:46.942 08:48:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:46.942 08:48:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.942 08:48:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.942 08:48:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.881 08:48:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:48.881 00:22:48.881 real 0m8.125s 00:22:48.881 user 0m5.489s 00:22:48.881 sys 0m3.656s 00:22:48.881 08:48:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:48.881 08:48:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:48.881 ************************************ 00:22:48.881 END TEST nvmf_fused_ordering 00:22:48.881 ************************************ 00:22:48.881 08:48:43 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:22:48.881 08:48:43 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:22:48.881 08:48:43 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:48.881 08:48:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:48.881 ************************************ 00:22:48.881 START TEST nvmf_delete_subsystem 00:22:48.881 ************************************ 00:22:48.881 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:22:48.881 
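For the fused_ordering run that just completed, the target-side setup that fused_ordering.sh drove through rpc_cmd maps onto the standalone sequence below. This is a sketch reconstructed from the xtrace, not the test script itself: the rpc.py path is assumed from the workspace layout, while every RPC name and argument is copied verbatim from the log.

    # Sketch: replay of the fused_ordering target setup seen in the xtrace above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path

    $RPC nvmf_create_transport -t tcp -o -u 8192                  # transport opts exactly as the test passed them
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -m 10                           # allow any host, serial number, max 10 namespaces
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420                               # the listener announced at tcp.c:967 above
    $RPC bdev_null_create NULL1 1000 512                          # 1000 MiB null bdev, 512-byte blocks ("size: 1GB")
    $RPC bdev_wait_for_examine
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # becomes Namespace ID 1

The fused_ordering binary then attaches as an initiator over TCP and prints one fused_ordering(N) line per iteration, 1024 in total here.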
* Looking for test storage... 00:22:48.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:48.881 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:48.881 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:22:48.881 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.881 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.881 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.881 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.881 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.881 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.881 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.881 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.881 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.881 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.881 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:48.881 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:48.881 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.881 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.881 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:22:48.882 08:48:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:51.411 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.411 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:22:51.411 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:51.411 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:51.412 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:51.412 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:51.412 Found net devices under 0000:09:00.0: cvl_0_0 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:51.412 Found net devices under 0000:09:00.1: cvl_0_1 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:51.412 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.670 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.670 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.670 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:51.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:22:51.670 00:22:51.670 --- 10.0.0.2 ping statistics --- 00:22:51.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.670 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:22:51.670 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:51.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:22:51.670 00:22:51.670 --- 10.0.0.1 ping statistics --- 00:22:51.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.670 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:22:51.670 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.670 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:22:51.670 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:51.670 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.670 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:51.670 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:51.670 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.670 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:51.670 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:51.670 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:22:51.670 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.670 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:51.671 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:51.671 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2227835 00:22:51.671 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:51.671 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2227835 00:22:51.671 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@828 -- # '[' -z 2227835 ']' 00:22:51.671 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.671 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:51.671 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
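The waitforlisten 2227835 call above blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock. Conceptually it behaves like the poll loop below; this is an illustrative stand-in only (the real helper lives in common/autotest_common.sh and is more elaborate), and the retry count and sleep interval here are assumptions.

    # Hypothetical stand-in for waitforlisten: poll the RPC socket until the app
    # responds. rpc_get_methods is a standard SPDK RPC that any running app serves.
    wait_for_rpc() {
        local sock=${1:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            if scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
                return 0   # target is up and serving RPCs
            fi
            sleep 0.1
        done
        return 1           # never came up; caller fails the test
    }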
00:22:51.671 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:51.671 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:51.671 [2024-05-15 08:48:46.294249] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:22:51.671 [2024-05-15 08:48:46.294343] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.671 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.671 [2024-05-15 08:48:46.368329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:51.671 [2024-05-15 08:48:46.453989] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.671 [2024-05-15 08:48:46.454055] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.671 [2024-05-15 08:48:46.454083] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.671 [2024-05-15 08:48:46.454095] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.671 [2024-05-15 08:48:46.454105] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.671 [2024-05-15 08:48:46.454187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.671 [2024-05-15 08:48:46.454192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@861 -- # return 0 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:51.929 [2024-05-15 08:48:46.599363] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:51.929 08:48:46 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:51.929 [2024-05-15 08:48:46.615339] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:51.929 [2024-05-15 08:48:46.615654] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:51.929 NULL1 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:51.929 Delay0 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2227867 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:22:51.929 08:48:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:22:51.929 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.929 [2024-05-15 08:48:46.690295] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
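Stripped of the xtrace noise, the target configuration and the load generator that is about to race with the delete reduce to a handful of commands. A condensed sketch using the same rpc.py and spdk_nvme_perf binaries and the exact arguments visible above; the flag glosses follow the tools' usage text and are orientation, not authoritative documentation. The 1000000 values handed to bdev_delay_create are microseconds, i.e. a full second of added latency on every read and write, which is what keeps a deep queue of I/O in flight at the moment the subsystem is deleted:

#!/usr/bin/env bash
# Sketch of the setup driven through rpc_cmd above (paths as in this CI job).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Null backing bdev (1000 MB, 512-byte blocks) wrapped in a delay bdev that adds
# 1s (1000000 us) average and p99 latency to reads (-r/-t) and writes (-w/-n).
"$RPC" bdev_null_create NULL1 1000 512
"$RPC" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Initiator-side load, flags as launched above:
#   -c 0xC           core mask: workers on cores 2 and 3
#   -t 5             run time in seconds
#   -q 128           queue depth
#   -w randrw -M 70  random mixed workload, 70% reads
#   -o 512           I/O size in bytes
#   -P 4             qpairs per namespace
"$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &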
00:22:54.456 08:48:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:54.456 08:48:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:54.456 08:48:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[... several hundred interleaved 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions and repeated 'starting I/O failed: -6' markers elided; the distinct initiator-side errors recorded while the subsystem was torn down under load follow ...]
00:22:54.456 [2024-05-15 08:48:48.861690] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b9180 is same with the state(5) to be set
00:22:54.457 [2024-05-15 08:48:48.862640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5c0c00bfe0 is same with the state(5) to be set
00:22:55.393 [2024-05-15 08:48:49.830140] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bc8b0 is same with the state(5) to be set
00:22:55.393 [2024-05-15 08:48:49.862872] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5c0c000c00 is same with the state(5) to be set
00:22:55.393 [2024-05-15 08:48:49.863091] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5c0c00c2f0 is same with the state(5) to be set
00:22:55.393 [2024-05-15 08:48:49.864204] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b9aa0 is same with the state(5) to be set
00:22:55.393 [2024-05-15 08:48:49.865093] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b9360 is same with the state(5) to be set
00:22:55.393 Initializing NVMe Controllers
00:22:55.393 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:55.393 Controller IO queue size 128, less than required.
00:22:55.393 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:55.393 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:22:55.393 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:22:55.393 Initialization complete. Launching workers.
00:22:55.393 ========================================================
00:22:55.393 Latency(us)
00:22:55.393 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:22:55.393 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  156.87    0.08  926834.19     420.55 1011331.77
00:22:55.393 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  164.31    0.08  960219.39     398.70 2001590.49
00:22:55.393 ========================================================
00:22:55.393 Total                                                                    :  321.18    0.16  943913.79     398.70 2001590.49
00:22:55.393
00:22:55.393 [2024-05-15 08:48:49.865948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bc8b0 (9): Bad file descriptor
00:22:55.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:55.393 08:48:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:55.393 08:48:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:22:55.393 08:48:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2227867 00:22:55.393 08:48:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2227867 00:22:55.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2227867) - No such process 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2227867 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 2227867 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 2227867 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:55.651 08:48:50
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:55.651 [2024-05-15 08:48:50.388996] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2228268 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2228268 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:22:55.651 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:22:55.651 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.942 [2024-05-15 08:48:50.452441] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
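The delete_subsystem.sh@56-@60 steps around this point are a poll loop rather than a bare wait: perf is expected to die on its own once the subsystem it is driving disappears, and the test only fails if the process lingers. The pattern as a standalone sketch (the in-tree script additionally routes the final wait through its NOT helper so the killed perf's non-zero exit status is tolerated):

# Sketch of the delete_subsystem.sh poll loop seen in the trace.
perf_pid=$!   # spdk_nvme_perf launched in the background
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 20 )); then   # allow roughly 10s (20 x 0.5s) to exit
        echo "perf (pid $perf_pid) survived the subsystem delete" >&2
        exit 1
    fi
    sleep 0.5
done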
00:22:56.200 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:22:56.200 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2228268 00:22:56.200 08:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:22:56.765 08:48:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:22:56.765 08:48:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2228268 00:22:56.765 08:48:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:22:57.331 08:48:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:22:57.331 08:48:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2228268 00:22:57.331 08:48:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:22:57.896 08:48:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:22:57.896 08:48:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2228268 00:22:57.896 08:48:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:22:58.154 08:48:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:22:58.154 08:48:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2228268 00:22:58.154 08:48:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:22:58.719 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:22:58.719 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2228268 00:22:58.719 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:22:58.719 Initializing NVMe Controllers 00:22:58.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:58.719 Controller IO queue size 128, less than required. 00:22:58.719 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:58.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:22:58.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:22:58.719 Initialization complete. Launching workers. 
00:22:58.719 ========================================================
00:22:58.719 Latency(us)
00:22:58.719 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:22:58.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06 1004058.43 1000189.37 1011988.90
00:22:58.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06 1004315.43 1000161.63 1012673.29
00:22:58.719 ========================================================
00:22:58.719 Total                                                                    :  256.00    0.12 1004186.93 1000161.63 1012673.29
00:22:58.719
00:22:59.284 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:22:59.284 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2228268 00:22:59.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2228268) - No such process 00:22:59.284 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2228268 00:22:59.284 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:59.284 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:22:59.284 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:59.284 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:22:59.284 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:59.284 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:22:59.284 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:59.284 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:59.284 rmmod nvme_tcp 00:22:59.284 rmmod nvme_fabrics 00:22:59.284 rmmod nvme_keyring 00:22:59.284 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:59.284 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:22:59.284 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:22:59.284 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2227835 ']' 00:22:59.284 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2227835 00:22:59.284 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@947 -- # '[' -z 2227835 ']' 00:22:59.284 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # kill -0 2227835 00:22:59.284 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # uname 00:22:59.284 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:59.284 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2227835 00:22:59.285 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:59.285 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:59.285 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2227835' killing process with pid 2227835 00:22:59.285 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # kill 2227835 00:22:59.285 [2024-05-15 08:48:53.993316] app.c:1024:log_deprecation_hits: *WARNING*:
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:59.285 08:48:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # wait 2227835 00:22:59.543 08:48:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:59.543 08:48:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:59.543 08:48:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:59.543 08:48:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:59.543 08:48:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:59.543 08:48:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.543 08:48:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.543 08:48:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.450 08:48:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:01.708 00:23:01.708 real 0m12.669s 00:23:01.708 user 0m27.794s 00:23:01.708 sys 0m3.188s 00:23:01.708 08:48:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:01.708 08:48:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:01.708 ************************************ 00:23:01.708 END TEST nvmf_delete_subsystem 00:23:01.708 ************************************ 00:23:01.708 08:48:56 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:23:01.708 08:48:56 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:01.708 08:48:56 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:01.709 08:48:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:01.709 ************************************ 00:23:01.709 START TEST nvmf_ns_masking 00:23:01.709 ************************************ 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:23:01.709 * Looking for test storage... 
00:23:01.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=a7d12e60-b9e1-420d-89a6-4779438ce8b8 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:01.709 08:48:56 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:23:01.709 08:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:04.278 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:04.278 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:04.278 Found net devices under 0000:09:00.0: cvl_0_0 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
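The discovery pass above is pure sysfs: for each supported PCI function it expands /sys/bus/pci/devices/<bdf>/net/* to find the bound kernel interfaces. A standalone sketch of the same lookup, hard-coding the two E810 functions this job found:

#!/usr/bin/env bash
# Sketch of the per-device lookup in gather_supported_nvmf_pci_devs.
shopt -s nullglob   # an unbound function should yield an empty array, not a literal glob
net_devs=()
for pci in 0000:09:00.0 0000:09:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    (( ${#pci_net_devs[@]} )) || continue      # no netdev bound to this function
    pci_net_devs=("${pci_net_devs[@]##*/}")    # strip the sysfs path, keep interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done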
00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.278 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:04.279 Found net devices under 0000:09:00.1: cvl_0_1 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:04.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:23:04.279 00:23:04.279 --- 10.0.0.2 ping statistics --- 00:23:04.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.279 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:04.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:04.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:23:04.279 00:23:04.279 --- 10.0.0.1 ping statistics --- 00:23:04.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.279 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2231019 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2231019 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@828 -- # '[' -z 2231019 ']' 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:04.279 08:48:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:23:04.279 [2024-05-15 08:48:58.935102] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
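nvmfappstart backgrounds nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket is usable. A simplified stand-in for that gate (the in-tree helper retries an actual RPC against the socket; this sketch merely polls for the socket file while checking that the target is still alive):

# Simplified sketch of nvmfappstart + waitforlisten (paths as in this job).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
for (( i = 0; i < 100; i++ )); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
    [[ -S /var/tmp/spdk.sock ]] && break
    sleep 0.1
done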
00:23:04.279 [2024-05-15 08:48:58.935200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.279 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.279 [2024-05-15 08:48:59.009998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:04.561 [2024-05-15 08:48:59.099587] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:04.561 [2024-05-15 08:48:59.099644] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:04.561 [2024-05-15 08:48:59.099672] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.561 [2024-05-15 08:48:59.099684] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.561 [2024-05-15 08:48:59.099694] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:04.561 [2024-05-15 08:48:59.099779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.561 [2024-05-15 08:48:59.099847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.561 [2024-05-15 08:48:59.099898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:04.561 [2024-05-15 08:48:59.099901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.562 08:48:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:04.562 08:48:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@861 -- # return 0 00:23:04.562 08:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:04.562 08:48:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:04.562 08:48:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:23:04.562 08:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:04.562 08:48:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:04.819 [2024-05-15 08:48:59.475815] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.819 08:48:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:23:04.819 08:48:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:23:04.819 08:48:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:05.077 Malloc1 00:23:05.077 08:48:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:23:05.335 Malloc2 00:23:05.335 08:49:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:05.593 08:49:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:23:05.850 08:49:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:06.107 [2024-05-15 08:49:00.790094] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:06.107 [2024-05-15 08:49:00.790425] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.107 08:49:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:23:06.107 08:49:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a7d12e60-b9e1-420d-89a6-4779438ce8b8 -a 10.0.0.2 -s 4420 -i 4 00:23:06.365 08:49:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:23:06.365 08:49:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:23:06.365 08:49:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:23:06.365 08:49:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:23:06.365 08:49:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:23:08.261 08:49:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:23:08.261 08:49:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:23:08.261 08:49:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:23:08.261 08:49:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:23:08.261 08:49:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:23:08.261 08:49:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:23:08.261 08:49:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:23:08.261 08:49:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:23:08.262 08:49:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:23:08.262 08:49:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:23:08.262 08:49:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:23:08.262 08:49:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:23:08.262 08:49:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:23:08.262 [ 0]:0x1 00:23:08.262 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:23:08.262 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:23:08.520 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=39b2c9ce24354c3dbf32d790b7023af8 00:23:08.520 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 39b2c9ce24354c3dbf32d790b7023af8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:08.520 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:23:08.776 08:49:03 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:23:08.776 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:23:08.776 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:23:08.776 [ 0]:0x1 00:23:08.776 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:23:08.776 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:23:08.777 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=39b2c9ce24354c3dbf32d790b7023af8 00:23:08.777 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 39b2c9ce24354c3dbf32d790b7023af8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:08.777 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:23:08.777 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:23:08.777 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:23:08.777 [ 1]:0x2 00:23:08.777 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:23:08.777 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:23:08.777 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4aa647cfe9204f26b05dca18fec6ed00 00:23:08.777 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4aa647cfe9204f26b05dca18fec6ed00 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:08.777 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:23:08.777 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:08.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:08.777 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:09.034 08:49:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:23:09.291 08:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:23:09.291 08:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a7d12e60-b9e1-420d-89a6-4779438ce8b8 -a 10.0.0.2 -s 4420 -i 4 00:23:09.549 08:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:23:09.549 08:49:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:23:09.549 08:49:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:23:09.549 08:49:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 1 ]] 00:23:09.549 08:49:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=1 00:23:09.549 08:49:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # 
grep -c SPDKISFASTANDAWESOME 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:23:11.453 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:23:11.453 [ 0]:0x2 00:23:11.711 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:23:11.711 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:23:11.711 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4aa647cfe9204f26b05dca18fec6ed00 00:23:11.711 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4aa647cfe9204f26b05dca18fec6ed00 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:11.711 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:23:11.969 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:23:11.969 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:23:11.969 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:23:11.969 [ 0]:0x1 00:23:11.969 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:23:11.969 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:23:11.969 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=39b2c9ce24354c3dbf32d790b7023af8 00:23:11.969 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 39b2c9ce24354c3dbf32d790b7023af8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:11.969 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:23:11.969 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:23:11.969 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:23:11.969 [ 1]:0x2 00:23:11.969 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:23:11.969 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:23:11.969 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4aa647cfe9204f26b05dca18fec6ed00 00:23:11.969 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4aa647cfe9204f26b05dca18fec6ed00 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:11.969 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:23:12.227 08:49:06 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:23:12.227 [ 0]:0x2 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4aa647cfe9204f26b05dca18fec6ed00 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4aa647cfe9204f26b05dca18fec6ed00 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:12.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:12.227 08:49:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:23:12.485 08:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:23:12.485 08:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a7d12e60-b9e1-420d-89a6-4779438ce8b8 -a 10.0.0.2 -s 4420 -i 4 00:23:12.743 08:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:23:12.743 08:49:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:23:12.743 08:49:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:23:12.743 08:49:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 2 ]] 00:23:12.743 08:49:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=2 00:23:12.743 08:49:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=2 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:23:14.639 [ 0]:0x1 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=39b2c9ce24354c3dbf32d790b7023af8 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 39b2c9ce24354c3dbf32d790b7023af8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:23:14.639 [ 1]:0x2 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4aa647cfe9204f26b05dca18fec6ed00 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4aa647cfe9204f26b05dca18fec6ed00 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:14.639 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:23:14.897 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:23:14.897 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:23:14.897 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:23:14.897 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:23:14.897 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:14.897 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:23:14.897 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:14.897 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:23:14.897 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:23:14.897 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:23:14.897 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:23:14.897 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:23:15.156 [ 0]:0x2 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4aa647cfe9204f26b05dca18fec6ed00 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4aa647cfe9204f26b05dca18fec6ed00 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:23:15.156 08:49:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:23:15.414 [2024-05-15 08:49:09.987435] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:23:15.414 
request: 00:23:15.414 { 00:23:15.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.414 "nsid": 2, 00:23:15.414 "host": "nqn.2016-06.io.spdk:host1", 00:23:15.414 "method": "nvmf_ns_remove_host", 00:23:15.414 "req_id": 1 00:23:15.414 } 00:23:15.414 Got JSON-RPC error response 00:23:15.414 response: 00:23:15.414 { 00:23:15.414 "code": -32602, 00:23:15.414 "message": "Invalid parameters" 00:23:15.414 } 00:23:15.414 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:23:15.414 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:15.414 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:15.414 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:15.414 08:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:23:15.414 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:23:15.414 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:23:15.414 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:23:15.414 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:15.414 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:23:15.414 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:15.414 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:23:15.414 08:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:23:15.414 08:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:23:15.414 08:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:23:15.414 08:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:23:15.414 08:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:23:15.414 08:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:15.415 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:23:15.415 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:15.415 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:15.415 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:15.415 08:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:23:15.415 08:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:23:15.415 08:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:23:15.415 [ 0]:0x2 00:23:15.415 08:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:23:15.415 08:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:23:15.415 08:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4aa647cfe9204f26b05dca18fec6ed00 00:23:15.415 08:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4aa647cfe9204f26b05dca18fec6ed00 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:15.415 08:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:23:15.415 08:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:15.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:15.415 08:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:15.673 rmmod nvme_tcp 00:23:15.673 rmmod nvme_fabrics 00:23:15.673 rmmod nvme_keyring 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2231019 ']' 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2231019 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # '[' -z 2231019 ']' 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # kill -0 2231019 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # uname 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2231019 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2231019' 00:23:15.673 killing process with pid 2231019 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # kill 2231019 00:23:15.673 [2024-05-15 08:49:10.449618] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:15.673 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@971 -- # wait 2231019 00:23:16.239 08:49:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:16.239 08:49:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:16.239 08:49:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:16.240 08:49:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:23:16.240 08:49:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:16.240 08:49:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.240 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.240 08:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.142 08:49:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:18.142 00:23:18.142 real 0m16.474s 00:23:18.142 user 0m49.965s 00:23:18.142 sys 0m3.942s 00:23:18.142 08:49:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:18.142 08:49:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:23:18.142 ************************************ 00:23:18.142 END TEST nvmf_ns_masking 00:23:18.142 ************************************ 00:23:18.142 08:49:12 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:23:18.142 08:49:12 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:23:18.142 08:49:12 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:18.142 08:49:12 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:18.142 08:49:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:18.142 ************************************ 00:23:18.142 START TEST nvmf_nvme_cli 00:23:18.142 ************************************ 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:23:18.142 * Looking for test storage... 
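Before the nvme_cli test begins, the namespace-masking flow that nvmf_ns_masking exercised above condenses to a handful of calls; a sketch, with rpc.py standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path shown in the log:

    # A namespace added with --no-auto-visible starts hidden from every host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # Per-host visibility is then toggled by host NQN
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # On the initiator, a masked namespace identifies with an all-zero NGUID
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

Namespace 2 (Malloc2) was added without --no-auto-visible, so the nvmf_ns_remove_host call against nsid 2 returned -32602 Invalid parameters; that is the negative case the NOT wrapper checked above, alongside the NOT-wrapped ns_is_visible checks that expect the all-zero NGUID.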
00:23:18.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.142 08:49:12 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:23:18.143 08:49:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:20.716 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:20.716 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:20.717 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:20.717 Found net devices under 0000:09:00.0: cvl_0_0 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:20.717 Found net devices under 0000:09:00.1: cvl_0_1 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:20.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:23:20.717 00:23:20.717 --- 10.0.0.2 ping statistics --- 00:23:20.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.717 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:20.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:23:20.717 00:23:20.717 --- 10.0.0.1 ping statistics --- 00:23:20.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.717 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2234745 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2234745 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@828 -- # '[' -z 2234745 ']' 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:20.717 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:20.717 [2024-05-15 08:49:15.466885] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:23:20.717 [2024-05-15 08:49:15.466963] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.717 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.975 [2024-05-15 08:49:15.542556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:20.975 [2024-05-15 08:49:15.629509] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.975 [2024-05-15 08:49:15.629576] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:20.975 [2024-05-15 08:49:15.629590] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.975 [2024-05-15 08:49:15.629600] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.976 [2024-05-15 08:49:15.629610] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:20.976 [2024-05-15 08:49:15.629728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.976 [2024-05-15 08:49:15.629794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.976 [2024-05-15 08:49:15.629845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.976 [2024-05-15 08:49:15.629842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:20.976 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:20.976 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@861 -- # return 0 00:23:20.976 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:20.976 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:20.976 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:20.976 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.976 08:49:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:20.976 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.976 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:21.233 [2024-05-15 08:49:15.773759] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.233 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:21.233 08:49:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:21.233 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:21.233 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:21.233 Malloc0 00:23:21.233 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:21.233 08:49:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:21.233 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:21.233 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:21.233 Malloc1 00:23:21.233 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:21.233 08:49:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:21.234 08:49:15 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:21.234 [2024-05-15 08:49:15.854439] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:21.234 [2024-05-15 08:49:15.854763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:23:21.234 00:23:21.234 Discovery Log Number of Records 2, Generation counter 2 00:23:21.234 =====Discovery Log Entry 0====== 00:23:21.234 trtype: tcp 00:23:21.234 adrfam: ipv4 00:23:21.234 subtype: current discovery subsystem 00:23:21.234 treq: not required 00:23:21.234 portid: 0 00:23:21.234 trsvcid: 4420 00:23:21.234 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:21.234 traddr: 10.0.0.2 00:23:21.234 eflags: explicit discovery connections, duplicate discovery information 00:23:21.234 sectype: none 00:23:21.234 =====Discovery Log Entry 1====== 00:23:21.234 trtype: tcp 00:23:21.234 adrfam: ipv4 00:23:21.234 subtype: nvme subsystem 00:23:21.234 treq: not required 00:23:21.234 portid: 0 00:23:21.234 trsvcid: 4420 00:23:21.234 subnqn: nqn.2016-06.io.spdk:cnode1 00:23:21.234 traddr: 10.0.0.2 00:23:21.234 eflags: none 00:23:21.234 sectype: none 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
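
The discovery log above advertises two records on 10.0.0.2:4420, and the trace that follows replays the same flow with nvme-cli. A minimal by-hand sketch using the exact addresses and NQNs from this run (the host NQN/ID pair is the one the harness derives with nvme gen-hostnqn; any valid pair would do):

    # list what the target advertises
    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --hostid=29f67375-a902-e411-ace9-001e67bc3c9a

    # attach the I/O subsystem; its two malloc namespaces show up as /dev/nvme0n1 and /dev/nvme0n2
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --hostid=29f67375-a902-e411-ace9-001e67bc3c9a

    # detach when finished
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
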
00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:23:21.234 08:49:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:21.798 08:49:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:23:21.798 08:49:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local i=0 00:23:21.798 08:49:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:23:21.798 08:49:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # [[ -n 2 ]] 00:23:21.798 08:49:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # nvme_device_counter=2 00:23:21.798 08:49:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # sleep 2 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # nvme_devices=2 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # return 0 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:23:24.331 /dev/nvme0n1 ]] 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:23:24.331 08:49:18 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:24.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # local i=0 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1228 -- # return 0 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:24.331 rmmod nvme_tcp 00:23:24.331 rmmod nvme_fabrics 00:23:24.331 rmmod nvme_keyring 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2234745 ']' 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2234745 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@947 -- # '[' -z 2234745 ']' 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # kill -0 2234745 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # uname 00:23:24.331 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:24.332 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2234745 00:23:24.332 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:24.332 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:24.332 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2234745' 00:23:24.332 killing process with pid 2234745 00:23:24.332 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # kill 2234745 00:23:24.332 [2024-05-15 08:49:18.707536] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:24.332 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # wait 2234745 00:23:24.332 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:24.332 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:24.332 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:24.332 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:24.332 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:24.332 08:49:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.332 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.332 08:49:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.236 08:49:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:26.236 00:23:26.236 real 0m8.174s 00:23:26.236 user 0m13.881s 00:23:26.236 sys 0m2.358s 00:23:26.236 08:49:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:26.236 08:49:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:26.236 ************************************ 00:23:26.236 END TEST nvmf_nvme_cli 00:23:26.236 ************************************ 00:23:26.494 08:49:21 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:23:26.494 08:49:21 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:23:26.494 08:49:21 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:26.494 08:49:21 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:26.494 08:49:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:26.494 ************************************ 00:23:26.494 START 
TEST nvmf_vfio_user 00:23:26.494 ************************************ 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:23:26.494 * Looking for test storage... 00:23:26.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.494 08:49:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2235546 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2235546' 00:23:26.495 Process pid: 2235546 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2235546 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@828 -- # '[' -z 2235546 ']' 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:26.495 08:49:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:23:26.495 [2024-05-15 08:49:21.169583] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:23:26.495 [2024-05-15 08:49:21.169682] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.495 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.495 [2024-05-15 08:49:21.236175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:26.752 [2024-05-15 08:49:21.318678] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.752 [2024-05-15 08:49:21.318744] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.752 [2024-05-15 08:49:21.318764] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.753 [2024-05-15 08:49:21.318776] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.753 [2024-05-15 08:49:21.318786] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
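
Behind the trace above, the pattern is: launch nvmf_tgt in the background, then block until its JSON-RPC socket answers before issuing any configuration. A rough stand-in for that start-and-wait sequence, assuming an SPDK build tree (the polling loop here only approximates the waitforlisten helper; rpc_get_methods is a standard SPDK RPC):

    # start the target on cores 0-3 with every tracepoint group enabled
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!

    # wait until the app listens on /var/tmp/spdk.sock
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Once it is up, 'spdk_trace -s nvmf -i 0' can snapshot the events enabled by -e 0xFFFF, or /dev/shm/nvmf_trace.0 can be copied for offline analysis, as the startup notices point out.
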
00:23:26.753 [2024-05-15 08:49:21.318904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.753 [2024-05-15 08:49:21.318983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.753 [2024-05-15 08:49:21.319032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:26.753 [2024-05-15 08:49:21.319034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.753 08:49:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:26.753 08:49:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@861 -- # return 0 00:23:26.753 08:49:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:23:27.684 08:49:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:23:28.250 08:49:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:23:28.250 08:49:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:23:28.250 08:49:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:23:28.250 08:49:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:23:28.250 08:49:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:28.250 Malloc1 00:23:28.509 08:49:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:23:28.509 08:49:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:23:28.767 08:49:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:23:29.025 [2024-05-15 08:49:23.776679] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:29.025 08:49:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:23:29.025 08:49:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:23:29.025 08:49:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:23:29.283 Malloc2 00:23:29.283 08:49:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:23:29.540 08:49:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:23:29.797 08:49:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
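
Stripped of the harness plumbing, provisioning one vfio-user device as traced above comes down to a socket directory plus five RPCs (rpc.py abbreviates the scripts/rpc.py path shown in the trace):

    # transport is created once per target
    rpc.py nvmf_create_transport -t VFIOUSER

    # per-device: socket dir, backing bdev, subsystem, namespace, listener
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The second device (Malloc2, cnode2, vfio-user2) repeats everything but the transport creation, and spdk_nvme_identify can then reach either controller through 'trtype:VFIOUSER traddr:<socket dir>', as the run below shows.
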
00:23:30.055 08:49:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:23:30.055 08:49:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:23:30.055 08:49:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:23:30.055 08:49:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:23:30.055 08:49:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:23:30.055 08:49:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:23:30.055 [2024-05-15 08:49:24.828414] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:23:30.055 [2024-05-15 08:49:24.828453] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236080 ] 00:23:30.055 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.315 [2024-05-15 08:49:24.861866] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:23:30.315 [2024-05-15 08:49:24.870249] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:23:30.315 [2024-05-15 08:49:24.870288] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb24c553000 00:23:30.315 [2024-05-15 08:49:24.873225] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:23:30.315 [2024-05-15 08:49:24.874248] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:23:30.315 [2024-05-15 08:49:24.875249] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:23:30.315 [2024-05-15 08:49:24.876256] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:23:30.315 [2024-05-15 08:49:24.877260] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:23:30.315 [2024-05-15 08:49:24.878268] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:23:30.315 [2024-05-15 08:49:24.879269] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:23:30.315 [2024-05-15 08:49:24.880276] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:23:30.315 [2024-05-15 08:49:24.881284] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:23:30.315 [2024-05-15 08:49:24.881304] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb24b309000 00:23:30.315 [2024-05-15 08:49:24.882447] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:23:30.315 [2024-05-15 08:49:24.898142] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:23:30.315 [2024-05-15 08:49:24.898180] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:23:30.315 [2024-05-15 08:49:24.903424] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:23:30.315 [2024-05-15 08:49:24.903480] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:23:30.315 [2024-05-15 08:49:24.903593] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:23:30.315 [2024-05-15 08:49:24.903625] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:23:30.315 [2024-05-15 08:49:24.903636] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:23:30.315 [2024-05-15 08:49:24.904415] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:23:30.315 [2024-05-15 08:49:24.904442] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:23:30.315 [2024-05-15 08:49:24.904456] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:23:30.315 [2024-05-15 08:49:24.905419] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:23:30.315 [2024-05-15 08:49:24.905437] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:23:30.315 [2024-05-15 08:49:24.905451] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:23:30.315 [2024-05-15 08:49:24.906426] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:23:30.315 [2024-05-15 08:49:24.906446] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:30.315 [2024-05-15 08:49:24.907432] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:23:30.315 [2024-05-15 08:49:24.907450] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:23:30.315 [2024-05-15 08:49:24.907460] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:23:30.315 [2024-05-15 08:49:24.907471] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:30.315 
[2024-05-15 08:49:24.907581] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:23:30.315 [2024-05-15 08:49:24.907590] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:30.315 [2024-05-15 08:49:24.907598] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:23:30.315 [2024-05-15 08:49:24.908444] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:23:30.315 [2024-05-15 08:49:24.909443] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:23:30.315 [2024-05-15 08:49:24.910453] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:23:30.315 [2024-05-15 08:49:24.911446] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:23:30.315 [2024-05-15 08:49:24.911554] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:30.316 [2024-05-15 08:49:24.912464] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:23:30.316 [2024-05-15 08:49:24.912482] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:30.316 [2024-05-15 08:49:24.912491] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:23:30.316 [2024-05-15 08:49:24.912531] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:23:30.316 [2024-05-15 08:49:24.912549] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:23:30.316 [2024-05-15 08:49:24.912584] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:23:30.316 [2024-05-15 08:49:24.912594] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:23:30.316 [2024-05-15 08:49:24.912617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:23:30.316 [2024-05-15 08:49:24.912677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:23:30.316 [2024-05-15 08:49:24.912695] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:23:30.316 [2024-05-15 08:49:24.912703] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:23:30.316 [2024-05-15 08:49:24.912711] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:23:30.316 [2024-05-15 08:49:24.912718] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:23:30.316 [2024-05-15 08:49:24.912726] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:23:30.316 [2024-05-15 08:49:24.912733] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:23:30.316 [2024-05-15 08:49:24.912741] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:23:30.316 [2024-05-15 08:49:24.912757] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:23:30.316 [2024-05-15 08:49:24.912776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:23:30.316 [2024-05-15 08:49:24.912792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:23:30.316 [2024-05-15 08:49:24.912815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.316 [2024-05-15 08:49:24.912828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.316 [2024-05-15 08:49:24.912840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.316 [2024-05-15 08:49:24.912851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.316 [2024-05-15 08:49:24.912859] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:30.316 [2024-05-15 08:49:24.912871] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:30.316 [2024-05-15 08:49:24.912884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:23:30.316 [2024-05-15 08:49:24.912895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:23:30.316 [2024-05-15 08:49:24.912906] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:23:30.316 [2024-05-15 08:49:24.912918] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:30.316 [2024-05-15 08:49:24.912929] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:23:30.316 [2024-05-15 08:49:24.912943] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:30.316 [2024-05-15 08:49:24.912957] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:23:30.316 [2024-05-15 
08:49:24.912974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:23:30.316 [2024-05-15 08:49:24.913027] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:23:30.316 [2024-05-15 08:49:24.913043] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:30.316 [2024-05-15 08:49:24.913056] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:23:30.316 [2024-05-15 08:49:24.913064] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:23:30.316 [2024-05-15 08:49:24.913073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:23:30.316 [2024-05-15 08:49:24.913089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:23:30.316 [2024-05-15 08:49:24.913110] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:23:30.316 [2024-05-15 08:49:24.913126] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:23:30.316 [2024-05-15 08:49:24.913140] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:23:30.316 [2024-05-15 08:49:24.913151] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:23:30.316 [2024-05-15 08:49:24.913159] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:23:30.316 [2024-05-15 08:49:24.913169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:23:30.316 [2024-05-15 08:49:24.913186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:23:30.316 [2024-05-15 08:49:24.913229] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:30.316 [2024-05-15 08:49:24.913245] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:30.316 [2024-05-15 08:49:24.913258] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:23:30.316 [2024-05-15 08:49:24.913266] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:23:30.316 [2024-05-15 08:49:24.913276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:23:30.316 [2024-05-15 08:49:24.913292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:23:30.316 [2024-05-15 08:49:24.913313] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:30.316 
[2024-05-15 08:49:24.913325] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:23:30.316 [2024-05-15 08:49:24.913339] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:23:30.316 [2024-05-15 08:49:24.913350] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:30.316 [2024-05-15 08:49:24.913362] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:23:30.316 [2024-05-15 08:49:24.913372] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:23:30.316 [2024-05-15 08:49:24.913380] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:23:30.316 [2024-05-15 08:49:24.913388] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:23:30.316 [2024-05-15 08:49:24.913422] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:23:30.316 [2024-05-15 08:49:24.913441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:23:30.316 [2024-05-15 08:49:24.913460] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:23:30.316 [2024-05-15 08:49:24.913472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:23:30.316 [2024-05-15 08:49:24.913488] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:23:30.316 [2024-05-15 08:49:24.913500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:23:30.316 [2024-05-15 08:49:24.913532] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:23:30.316 [2024-05-15 08:49:24.913543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:23:30.316 [2024-05-15 08:49:24.913561] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:23:30.316 [2024-05-15 08:49:24.913570] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:23:30.316 [2024-05-15 08:49:24.913591] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:23:30.316 [2024-05-15 08:49:24.913597] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:23:30.316 [2024-05-15 08:49:24.913607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:23:30.316 [2024-05-15 08:49:24.913618] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:23:30.316 [2024-05-15 08:49:24.913626] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:23:30.316 [2024-05-15 08:49:24.913635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:23:30.316 [2024-05-15 08:49:24.913645] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:23:30.316 [2024-05-15 08:49:24.913653] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:23:30.316 [2024-05-15 08:49:24.913662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:23:30.316 [2024-05-15 08:49:24.913677] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:23:30.316 [2024-05-15 08:49:24.913686] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:23:30.317 [2024-05-15 08:49:24.913695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:23:30.317 [2024-05-15 08:49:24.913706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:23:30.317 [2024-05-15 08:49:24.913728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:23:30.317 [2024-05-15 08:49:24.913745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:23:30.317 [2024-05-15 08:49:24.913760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:23:30.317 ===================================================== 00:23:30.317 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:23:30.317 ===================================================== 00:23:30.317 Controller Capabilities/Features 00:23:30.317 ================================ 00:23:30.317 Vendor ID: 4e58 00:23:30.317 Subsystem Vendor ID: 4e58 00:23:30.317 Serial Number: SPDK1 00:23:30.317 Model Number: SPDK bdev Controller 00:23:30.317 Firmware Version: 24.05 00:23:30.317 Recommended Arb Burst: 6 00:23:30.317 IEEE OUI Identifier: 8d 6b 50 00:23:30.317 Multi-path I/O 00:23:30.317 May have multiple subsystem ports: Yes 00:23:30.317 May have multiple controllers: Yes 00:23:30.317 Associated with SR-IOV VF: No 00:23:30.317 Max Data Transfer Size: 131072 00:23:30.317 Max Number of Namespaces: 32 00:23:30.317 Max Number of I/O Queues: 127 00:23:30.317 NVMe Specification Version (VS): 1.3 00:23:30.317 NVMe Specification Version (Identify): 1.3 00:23:30.317 Maximum Queue Entries: 256 00:23:30.317 Contiguous Queues Required: Yes 00:23:30.317 Arbitration Mechanisms Supported 00:23:30.317 Weighted Round Robin: Not Supported 00:23:30.317 Vendor Specific: Not Supported 00:23:30.317 Reset Timeout: 15000 ms 00:23:30.317 Doorbell Stride: 4 bytes 00:23:30.317 NVM Subsystem Reset: Not Supported 00:23:30.317 Command Sets Supported 00:23:30.317 NVM Command Set: Supported 00:23:30.317 Boot Partition: Not Supported 00:23:30.317 Memory Page Size Minimum: 4096 bytes 00:23:30.317 Memory Page Size Maximum: 4096 bytes 00:23:30.317 Persistent Memory Region: Not Supported 00:23:30.317 Optional Asynchronous 
Events Supported 00:23:30.317 Namespace Attribute Notices: Supported 00:23:30.317 Firmware Activation Notices: Not Supported 00:23:30.317 ANA Change Notices: Not Supported 00:23:30.317 PLE Aggregate Log Change Notices: Not Supported 00:23:30.317 LBA Status Info Alert Notices: Not Supported 00:23:30.317 EGE Aggregate Log Change Notices: Not Supported 00:23:30.317 Normal NVM Subsystem Shutdown event: Not Supported 00:23:30.317 Zone Descriptor Change Notices: Not Supported 00:23:30.317 Discovery Log Change Notices: Not Supported 00:23:30.317 Controller Attributes 00:23:30.317 128-bit Host Identifier: Supported 00:23:30.317 Non-Operational Permissive Mode: Not Supported 00:23:30.317 NVM Sets: Not Supported 00:23:30.317 Read Recovery Levels: Not Supported 00:23:30.317 Endurance Groups: Not Supported 00:23:30.317 Predictable Latency Mode: Not Supported 00:23:30.317 Traffic Based Keep ALive: Not Supported 00:23:30.317 Namespace Granularity: Not Supported 00:23:30.317 SQ Associations: Not Supported 00:23:30.317 UUID List: Not Supported 00:23:30.317 Multi-Domain Subsystem: Not Supported 00:23:30.317 Fixed Capacity Management: Not Supported 00:23:30.317 Variable Capacity Management: Not Supported 00:23:30.317 Delete Endurance Group: Not Supported 00:23:30.317 Delete NVM Set: Not Supported 00:23:30.317 Extended LBA Formats Supported: Not Supported 00:23:30.317 Flexible Data Placement Supported: Not Supported 00:23:30.317 00:23:30.317 Controller Memory Buffer Support 00:23:30.317 ================================ 00:23:30.317 Supported: No 00:23:30.317 00:23:30.317 Persistent Memory Region Support 00:23:30.317 ================================ 00:23:30.317 Supported: No 00:23:30.317 00:23:30.317 Admin Command Set Attributes 00:23:30.317 ============================ 00:23:30.317 Security Send/Receive: Not Supported 00:23:30.317 Format NVM: Not Supported 00:23:30.317 Firmware Activate/Download: Not Supported 00:23:30.317 Namespace Management: Not Supported 00:23:30.317 Device Self-Test: Not Supported 00:23:30.317 Directives: Not Supported 00:23:30.317 NVMe-MI: Not Supported 00:23:30.317 Virtualization Management: Not Supported 00:23:30.317 Doorbell Buffer Config: Not Supported 00:23:30.317 Get LBA Status Capability: Not Supported 00:23:30.317 Command & Feature Lockdown Capability: Not Supported 00:23:30.317 Abort Command Limit: 4 00:23:30.317 Async Event Request Limit: 4 00:23:30.317 Number of Firmware Slots: N/A 00:23:30.317 Firmware Slot 1 Read-Only: N/A 00:23:30.317 Firmware Activation Without Reset: N/A 00:23:30.317 Multiple Update Detection Support: N/A 00:23:30.317 Firmware Update Granularity: No Information Provided 00:23:30.317 Per-Namespace SMART Log: No 00:23:30.317 Asymmetric Namespace Access Log Page: Not Supported 00:23:30.317 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:23:30.317 Command Effects Log Page: Supported 00:23:30.317 Get Log Page Extended Data: Supported 00:23:30.317 Telemetry Log Pages: Not Supported 00:23:30.317 Persistent Event Log Pages: Not Supported 00:23:30.317 Supported Log Pages Log Page: May Support 00:23:30.317 Commands Supported & Effects Log Page: Not Supported 00:23:30.317 Feature Identifiers & Effects Log Page:May Support 00:23:30.317 NVMe-MI Commands & Effects Log Page: May Support 00:23:30.317 Data Area 4 for Telemetry Log: Not Supported 00:23:30.317 Error Log Page Entries Supported: 128 00:23:30.317 Keep Alive: Supported 00:23:30.317 Keep Alive Granularity: 10000 ms 00:23:30.317 00:23:30.317 NVM Command Set Attributes 00:23:30.317 ========================== 
00:23:30.317 Submission Queue Entry Size 00:23:30.317 Max: 64 00:23:30.317 Min: 64 00:23:30.317 Completion Queue Entry Size 00:23:30.317 Max: 16 00:23:30.317 Min: 16 00:23:30.317 Number of Namespaces: 32 00:23:30.317 Compare Command: Supported 00:23:30.317 Write Uncorrectable Command: Not Supported 00:23:30.317 Dataset Management Command: Supported 00:23:30.317 Write Zeroes Command: Supported 00:23:30.317 Set Features Save Field: Not Supported 00:23:30.317 Reservations: Not Supported 00:23:30.317 Timestamp: Not Supported 00:23:30.317 Copy: Supported 00:23:30.317 Volatile Write Cache: Present 00:23:30.317 Atomic Write Unit (Normal): 1 00:23:30.317 Atomic Write Unit (PFail): 1 00:23:30.317 Atomic Compare & Write Unit: 1 00:23:30.317 Fused Compare & Write: Supported 00:23:30.317 Scatter-Gather List 00:23:30.317 SGL Command Set: Supported (Dword aligned) 00:23:30.317 SGL Keyed: Not Supported 00:23:30.317 SGL Bit Bucket Descriptor: Not Supported 00:23:30.317 SGL Metadata Pointer: Not Supported 00:23:30.317 Oversized SGL: Not Supported 00:23:30.317 SGL Metadata Address: Not Supported 00:23:30.317 SGL Offset: Not Supported 00:23:30.317 Transport SGL Data Block: Not Supported 00:23:30.317 Replay Protected Memory Block: Not Supported 00:23:30.317 00:23:30.317 Firmware Slot Information 00:23:30.317 ========================= 00:23:30.317 Active slot: 1 00:23:30.317 Slot 1 Firmware Revision: 24.05 00:23:30.317 00:23:30.317 00:23:30.317 Commands Supported and Effects 00:23:30.317 ============================== 00:23:30.317 Admin Commands 00:23:30.317 -------------- 00:23:30.317 Get Log Page (02h): Supported 00:23:30.317 Identify (06h): Supported 00:23:30.317 Abort (08h): Supported 00:23:30.317 Set Features (09h): Supported 00:23:30.317 Get Features (0Ah): Supported 00:23:30.317 Asynchronous Event Request (0Ch): Supported 00:23:30.317 Keep Alive (18h): Supported 00:23:30.317 I/O Commands 00:23:30.317 ------------ 00:23:30.317 Flush (00h): Supported LBA-Change 00:23:30.317 Write (01h): Supported LBA-Change 00:23:30.317 Read (02h): Supported 00:23:30.317 Compare (05h): Supported 00:23:30.317 Write Zeroes (08h): Supported LBA-Change 00:23:30.317 Dataset Management (09h): Supported LBA-Change 00:23:30.317 Copy (19h): Supported LBA-Change 00:23:30.317 Unknown (79h): Supported LBA-Change 00:23:30.317 Unknown (7Ah): Supported 00:23:30.317 00:23:30.317 Error Log 00:23:30.317 ========= 00:23:30.317 00:23:30.317 Arbitration 00:23:30.317 =========== 00:23:30.317 Arbitration Burst: 1 00:23:30.317 00:23:30.317 Power Management 00:23:30.317 ================ 00:23:30.317 Number of Power States: 1 00:23:30.317 Current Power State: Power State #0 00:23:30.317 Power State #0: 00:23:30.317 Max Power: 0.00 W 00:23:30.317 Non-Operational State: Operational 00:23:30.317 Entry Latency: Not Reported 00:23:30.317 Exit Latency: Not Reported 00:23:30.317 Relative Read Throughput: 0 00:23:30.317 Relative Read Latency: 0 00:23:30.317 Relative Write Throughput: 0 00:23:30.317 Relative Write Latency: 0 00:23:30.318 Idle Power: Not Reported 00:23:30.318 Active Power: Not Reported 00:23:30.318 Non-Operational Permissive Mode: Not Supported 00:23:30.318 00:23:30.318 Health Information 00:23:30.318 ================== 00:23:30.318 Critical Warnings: 00:23:30.318 Available Spare Space: OK 00:23:30.318 Temperature: OK 00:23:30.318 Device Reliability: OK 00:23:30.318 Read Only: No 00:23:30.318 Volatile Memory Backup: OK 00:23:30.318 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:30.318
[2024-05-15 08:49:24.913873] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:23:30.318 [2024-05-15 08:49:24.913889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:23:30.318 [2024-05-15 08:49:24.913927] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:23:30.318 [2024-05-15 08:49:24.913944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.318 [2024-05-15 08:49:24.913955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.318 [2024-05-15 08:49:24.913965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.318 [2024-05-15 08:49:24.913974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.318 [2024-05-15 08:49:24.914481] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:23:30.318 [2024-05-15 08:49:24.914518] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:23:30.318 [2024-05-15 08:49:24.915476] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:23:30.318 [2024-05-15 08:49:24.915564] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:23:30.318 [2024-05-15 08:49:24.915579] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:23:30.318 [2024-05-15 08:49:24.916486] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:23:30.318 [2024-05-15 08:49:24.916524] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:23:30.318 [2024-05-15 08:49:24.916594] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:23:30.318 [2024-05-15 08:49:24.920226] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:23:30.318
Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:30.318 Available Spare: 0% 00:23:30.318 Available Spare Threshold: 0% 00:23:30.318 Life Percentage Used: 0% 00:23:30.318 Data Units Read: 0 00:23:30.318 Data Units Written: 0 00:23:30.318 Host Read Commands: 0 00:23:30.318 Host Write Commands: 0 00:23:30.318 Controller Busy Time: 0 minutes 00:23:30.318 Power Cycles: 0 00:23:30.318 Power On Hours: 0 hours 00:23:30.318 Unsafe Shutdowns: 0 00:23:30.318 Unrecoverable Media Errors: 0 00:23:30.318 Lifetime Error Log Entries: 0 00:23:30.318 Warning Temperature Time: 0 minutes 00:23:30.318 Critical Temperature Time: 0 minutes 00:23:30.318 00:23:30.318 Number of Queues 00:23:30.318 ================ 00:23:30.318 Number of I/O Submission Queues: 127 00:23:30.318 Number of I/O Completion Queues: 127 00:23:30.318 00:23:30.318 Active Namespaces 00:23:30.318 ================= 00:23:30.318 Namespace
ID:1 00:23:30.318 Error Recovery Timeout: Unlimited 00:23:30.318 Command Set Identifier: NVM (00h) 00:23:30.318 Deallocate: Supported 00:23:30.318 Deallocated/Unwritten Error: Not Supported 00:23:30.318 Deallocated Read Value: Unknown 00:23:30.318 Deallocate in Write Zeroes: Not Supported 00:23:30.318 Deallocated Guard Field: 0xFFFF 00:23:30.318 Flush: Supported 00:23:30.318 Reservation: Supported 00:23:30.318 Namespace Sharing Capabilities: Multiple Controllers 00:23:30.318 Size (in LBAs): 131072 (0GiB) 00:23:30.318 Capacity (in LBAs): 131072 (0GiB) 00:23:30.318 Utilization (in LBAs): 131072 (0GiB) 00:23:30.318 NGUID: 5E5D3E8E0CEA40A88FD9F9F010C20A78 00:23:30.318 UUID: 5e5d3e8e-0cea-40a8-8fd9-f9f010c20a78 00:23:30.318 Thin Provisioning: Not Supported 00:23:30.318 Per-NS Atomic Units: Yes 00:23:30.318 Atomic Boundary Size (Normal): 0 00:23:30.318 Atomic Boundary Size (PFail): 0 00:23:30.318 Atomic Boundary Offset: 0 00:23:30.318 Maximum Single Source Range Length: 65535 00:23:30.318 Maximum Copy Length: 65535 00:23:30.318 Maximum Source Range Count: 1 00:23:30.318 NGUID/EUI64 Never Reused: No 00:23:30.318 Namespace Write Protected: No 00:23:30.318 Number of LBA Formats: 1 00:23:30.318 Current LBA Format: LBA Format #00 00:23:30.318 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:30.318 00:23:30.318 08:49:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:23:30.318 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.576 [2024-05-15 08:49:25.150065] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:23:35.837 Initializing NVMe Controllers 00:23:35.837 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:23:35.837 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:23:35.837 Initialization complete. Launching workers. 00:23:35.837 ======================================================== 00:23:35.837 Latency(us) 00:23:35.837 Device Information : IOPS MiB/s Average min max 00:23:35.837 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34976.60 136.63 3660.64 1145.70 9706.80 00:23:35.837 ======================================================== 00:23:35.837 Total : 34976.60 136.63 3660.64 1145.70 9706.80 00:23:35.837 00:23:35.837 [2024-05-15 08:49:30.174488] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:23:35.837 08:49:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:23:35.837 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.837 [2024-05-15 08:49:30.416693] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:23:41.159 Initializing NVMe Controllers 00:23:41.159 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:23:41.159 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:23:41.159 Initialization complete. Launching workers. 
00:23:41.159 ======================================================== 00:23:41.159 Latency(us) 00:23:41.159 Device Information : IOPS MiB/s Average min max 00:23:41.159 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15948.80 62.30 8032.30 4970.40 15975.60 00:23:41.159 ======================================================== 00:23:41.159 Total : 15948.80 62.30 8032.30 4970.40 15975.60 00:23:41.159 00:23:41.159 [2024-05-15 08:49:35.450894] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:23:41.159 08:49:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:23:41.159 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.159 [2024-05-15 08:49:35.674971] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:23:46.422 [2024-05-15 08:49:40.740553] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:23:46.422 Initializing NVMe Controllers 00:23:46.422 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:23:46.422 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:23:46.422 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:23:46.422 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:23:46.422 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:23:46.422 Initialization complete. Launching workers. 00:23:46.422 Starting thread on core 2 00:23:46.422 Starting thread on core 3 00:23:46.422 Starting thread on core 1 00:23:46.422 08:49:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:23:46.422 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.422 [2024-05-15 08:49:41.048706] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:23:49.702 [2024-05-15 08:49:44.209502] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:23:49.702 Initializing NVMe Controllers 00:23:49.702 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:23:49.702 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:23:49.702 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:23:49.702 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:23:49.702 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:23:49.702 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:23:49.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:23:49.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:23:49.702 Initialization complete. Launching workers. 
00:23:49.702 Starting thread on core 1 with urgent priority queue 00:23:49.702 Starting thread on core 2 with urgent priority queue 00:23:49.702 Starting thread on core 3 with urgent priority queue 00:23:49.702 Starting thread on core 0 with urgent priority queue 00:23:49.702 SPDK bdev Controller (SPDK1 ) core 0: 4795.67 IO/s 20.85 secs/100000 ios 00:23:49.702 SPDK bdev Controller (SPDK1 ) core 1: 4955.00 IO/s 20.18 secs/100000 ios 00:23:49.702 SPDK bdev Controller (SPDK1 ) core 2: 4775.00 IO/s 20.94 secs/100000 ios 00:23:49.702 SPDK bdev Controller (SPDK1 ) core 3: 4669.00 IO/s 21.42 secs/100000 ios 00:23:49.702 ======================================================== 00:23:49.702 00:23:49.702 08:49:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:23:49.702 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.960 [2024-05-15 08:49:44.530820] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:23:49.960 Initializing NVMe Controllers 00:23:49.960 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:23:49.960 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:23:49.960 Namespace ID: 1 size: 0GB 00:23:49.960 Initialization complete. 00:23:49.960 INFO: using host memory buffer for IO 00:23:49.960 Hello world! 00:23:49.960 [2024-05-15 08:49:44.564428] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:23:49.960 08:49:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:23:49.960 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.217 [2024-05-15 08:49:44.874701] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:23:51.150 Initializing NVMe Controllers 00:23:51.150 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:23:51.150 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:23:51.150 Initialization complete. Launching workers. 
00:23:51.150 submit (in ns) avg, min, max = 9549.4, 3554.4, 5996318.9 00:23:51.150 complete (in ns) avg, min, max = 23451.8, 2075.6, 4014968.9 00:23:51.150 00:23:51.150 Submit histogram 00:23:51.150 ================ 00:23:51.150 Range in us Cumulative Count 00:23:51.150 3.532 - 3.556: 0.0075% ( 1) 00:23:51.150 3.556 - 3.579: 0.0827% ( 10) 00:23:51.150 3.579 - 3.603: 3.5180% ( 457) 00:23:51.150 3.603 - 3.627: 11.3734% ( 1045) 00:23:51.150 3.627 - 3.650: 20.4540% ( 1208) 00:23:51.150 3.650 - 3.674: 29.4370% ( 1195) 00:23:51.150 3.674 - 3.698: 38.5101% ( 1207) 00:23:51.150 3.698 - 3.721: 45.9746% ( 993) 00:23:51.150 3.721 - 3.745: 51.3493% ( 715) 00:23:51.150 3.745 - 3.769: 55.9047% ( 606) 00:23:51.150 3.769 - 3.793: 59.8963% ( 531) 00:23:51.150 3.793 - 3.816: 62.7903% ( 385) 00:23:51.150 3.816 - 3.840: 65.5566% ( 368) 00:23:51.150 3.840 - 3.864: 69.3828% ( 509) 00:23:51.150 3.864 - 3.887: 73.6150% ( 563) 00:23:51.150 3.887 - 3.911: 78.0125% ( 585) 00:23:51.150 3.911 - 3.935: 81.8236% ( 507) 00:23:51.150 3.935 - 3.959: 84.3644% ( 338) 00:23:51.150 3.959 - 3.982: 86.5143% ( 286) 00:23:51.150 3.982 - 4.006: 88.4688% ( 260) 00:23:51.150 4.006 - 4.030: 89.9045% ( 191) 00:23:51.150 4.030 - 4.053: 91.1449% ( 165) 00:23:51.150 4.053 - 4.077: 92.0394% ( 119) 00:23:51.150 4.077 - 4.101: 92.9865% ( 126) 00:23:51.150 4.101 - 4.124: 93.9187% ( 124) 00:23:51.150 4.124 - 4.148: 94.7380% ( 109) 00:23:51.150 4.148 - 4.172: 95.3845% ( 86) 00:23:51.150 4.172 - 4.196: 95.7829% ( 53) 00:23:51.150 4.196 - 4.219: 96.1963% ( 55) 00:23:51.150 4.219 - 4.243: 96.3993% ( 27) 00:23:51.150 4.243 - 4.267: 96.6549% ( 34) 00:23:51.150 4.267 - 4.290: 96.8278% ( 23) 00:23:51.150 4.290 - 4.314: 96.9856% ( 21) 00:23:51.150 4.314 - 4.338: 97.0608% ( 10) 00:23:51.150 4.338 - 4.361: 97.1285% ( 9) 00:23:51.150 4.361 - 4.385: 97.1811% ( 7) 00:23:51.150 4.385 - 4.409: 97.2112% ( 4) 00:23:51.150 4.409 - 4.433: 97.2863% ( 10) 00:23:51.150 4.433 - 4.456: 97.3540% ( 9) 00:23:51.150 4.456 - 4.480: 97.3690% ( 2) 00:23:51.150 4.480 - 4.504: 97.4141% ( 6) 00:23:51.150 4.504 - 4.527: 97.4367% ( 3) 00:23:51.150 4.527 - 4.551: 97.4442% ( 1) 00:23:51.150 4.551 - 4.575: 97.4743% ( 4) 00:23:51.150 4.575 - 4.599: 97.5043% ( 4) 00:23:51.150 4.622 - 4.646: 97.5269% ( 3) 00:23:51.150 4.646 - 4.670: 97.5344% ( 1) 00:23:51.150 4.670 - 4.693: 97.5645% ( 4) 00:23:51.150 4.693 - 4.717: 97.5945% ( 4) 00:23:51.150 4.717 - 4.741: 97.6020% ( 1) 00:23:51.150 4.741 - 4.764: 97.6246% ( 3) 00:23:51.150 4.764 - 4.788: 97.6547% ( 4) 00:23:51.150 4.788 - 4.812: 97.6772% ( 3) 00:23:51.150 4.812 - 4.836: 97.6998% ( 3) 00:23:51.150 4.836 - 4.859: 97.7223% ( 3) 00:23:51.150 4.859 - 4.883: 97.7374% ( 2) 00:23:51.150 4.907 - 4.930: 97.7674% ( 4) 00:23:51.150 4.930 - 4.954: 97.8125% ( 6) 00:23:51.150 4.954 - 4.978: 97.8276% ( 2) 00:23:51.150 4.978 - 5.001: 97.8501% ( 3) 00:23:51.150 5.001 - 5.025: 97.8802% ( 4) 00:23:51.150 5.025 - 5.049: 97.8952% ( 2) 00:23:51.150 5.049 - 5.073: 97.9253% ( 4) 00:23:51.150 5.073 - 5.096: 97.9403% ( 2) 00:23:51.150 5.096 - 5.120: 97.9553% ( 2) 00:23:51.150 5.120 - 5.144: 97.9779% ( 3) 00:23:51.150 5.144 - 5.167: 97.9929% ( 2) 00:23:51.150 5.167 - 5.191: 98.0155% ( 3) 00:23:51.150 5.191 - 5.215: 98.0456% ( 4) 00:23:51.150 5.239 - 5.262: 98.0531% ( 1) 00:23:51.150 5.262 - 5.286: 98.0606% ( 1) 00:23:51.150 5.286 - 5.310: 98.0907% ( 4) 00:23:51.150 5.310 - 5.333: 98.0982% ( 1) 00:23:51.150 5.333 - 5.357: 98.1057% ( 1) 00:23:51.150 5.357 - 5.381: 98.1282% ( 3) 00:23:51.150 5.381 - 5.404: 98.1358% ( 1) 00:23:51.150 5.404 - 5.428: 98.1508% ( 2) 
00:23:51.150 5.428 - 5.452: 98.1658% ( 2) 00:23:51.150 5.452 - 5.476: 98.1733% ( 1) 00:23:51.150 5.736 - 5.760: 98.1809% ( 1) 00:23:51.150 5.879 - 5.902: 98.1884% ( 1) 00:23:51.150 5.926 - 5.950: 98.2109% ( 3) 00:23:51.150 5.973 - 5.997: 98.2184% ( 1) 00:23:51.150 6.021 - 6.044: 98.2260% ( 1) 00:23:51.150 6.044 - 6.068: 98.2485% ( 3) 00:23:51.150 6.068 - 6.116: 98.2560% ( 1) 00:23:51.150 6.116 - 6.163: 98.2635% ( 1) 00:23:51.150 6.163 - 6.210: 98.2711% ( 1) 00:23:51.150 6.210 - 6.258: 98.2786% ( 1) 00:23:51.150 6.258 - 6.305: 98.2861% ( 1) 00:23:51.150 6.305 - 6.353: 98.2936% ( 1) 00:23:51.150 6.400 - 6.447: 98.3011% ( 1) 00:23:51.150 6.447 - 6.495: 98.3087% ( 1) 00:23:51.150 6.542 - 6.590: 98.3162% ( 1) 00:23:51.150 6.637 - 6.684: 98.3237% ( 1) 00:23:51.150 6.684 - 6.732: 98.3312% ( 1) 00:23:51.150 6.921 - 6.969: 98.3387% ( 1) 00:23:51.150 6.969 - 7.016: 98.3538% ( 2) 00:23:51.150 7.064 - 7.111: 98.3763% ( 3) 00:23:51.150 7.111 - 7.159: 98.3838% ( 1) 00:23:51.150 7.159 - 7.206: 98.4139% ( 4) 00:23:51.150 7.206 - 7.253: 98.4289% ( 2) 00:23:51.150 7.253 - 7.301: 98.4364% ( 1) 00:23:51.150 7.396 - 7.443: 98.4515% ( 2) 00:23:51.150 7.443 - 7.490: 98.4665% ( 2) 00:23:51.150 7.490 - 7.538: 98.4740% ( 1) 00:23:51.150 7.585 - 7.633: 98.4815% ( 1) 00:23:51.150 7.633 - 7.680: 98.4891% ( 1) 00:23:51.150 7.680 - 7.727: 98.5041% ( 2) 00:23:51.150 7.727 - 7.775: 98.5191% ( 2) 00:23:51.150 7.917 - 7.964: 98.5417% ( 3) 00:23:51.150 7.964 - 8.012: 98.5492% ( 1) 00:23:51.150 8.107 - 8.154: 98.5567% ( 1) 00:23:51.150 8.296 - 8.344: 98.5642% ( 1) 00:23:51.150 8.676 - 8.723: 98.5718% ( 1) 00:23:51.150 8.770 - 8.818: 98.5793% ( 1) 00:23:51.150 8.865 - 8.913: 98.5868% ( 1) 00:23:51.150 9.434 - 9.481: 98.5943% ( 1) 00:23:51.150 9.766 - 9.813: 98.6018% ( 1) 00:23:51.150 10.193 - 10.240: 98.6093% ( 1) 00:23:51.150 10.904 - 10.951: 98.6169% ( 1) 00:23:51.150 11.236 - 11.283: 98.6244% ( 1) 00:23:51.150 11.330 - 11.378: 98.6319% ( 1) 00:23:51.150 11.473 - 11.520: 98.6394% ( 1) 00:23:51.150 11.757 - 11.804: 98.6469% ( 1) 00:23:51.150 11.852 - 11.899: 98.6544% ( 1) 00:23:51.150 11.947 - 11.994: 98.6620% ( 1) 00:23:51.150 12.136 - 12.231: 98.6770% ( 2) 00:23:51.150 12.231 - 12.326: 98.6845% ( 1) 00:23:51.150 12.516 - 12.610: 98.6920% ( 1) 00:23:51.150 12.800 - 12.895: 98.6995% ( 1) 00:23:51.150 12.990 - 13.084: 98.7071% ( 1) 00:23:51.150 13.179 - 13.274: 98.7146% ( 1) 00:23:51.150 13.653 - 13.748: 98.7296% ( 2) 00:23:51.150 13.748 - 13.843: 98.7371% ( 1) 00:23:51.150 14.127 - 14.222: 98.7446% ( 1) 00:23:51.150 14.507 - 14.601: 98.7522% ( 1) 00:23:51.150 17.351 - 17.446: 98.7597% ( 1) 00:23:51.150 17.446 - 17.541: 98.7897% ( 4) 00:23:51.150 17.541 - 17.636: 98.8348% ( 6) 00:23:51.150 17.636 - 17.730: 98.8875% ( 7) 00:23:51.150 17.730 - 17.825: 98.9551% ( 9) 00:23:51.151 17.825 - 17.920: 98.9927% ( 5) 00:23:51.151 17.920 - 18.015: 99.0378% ( 6) 00:23:51.151 18.015 - 18.110: 99.0754% ( 5) 00:23:51.151 18.110 - 18.204: 99.1431% ( 9) 00:23:51.151 18.204 - 18.299: 99.2408% ( 13) 00:23:51.151 18.299 - 18.394: 99.2934% ( 7) 00:23:51.151 18.394 - 18.489: 99.3159% ( 3) 00:23:51.151 18.489 - 18.584: 99.3911% ( 10) 00:23:51.151 18.584 - 18.679: 99.4813% ( 12) 00:23:51.151 18.679 - 18.773: 99.5339% ( 7) 00:23:51.151 18.773 - 18.868: 99.5490% ( 2) 00:23:51.151 18.868 - 18.963: 99.5866% ( 5) 00:23:51.151 18.963 - 19.058: 99.6241% ( 5) 00:23:51.151 19.058 - 19.153: 99.6467% ( 3) 00:23:51.151 19.153 - 19.247: 99.6768% ( 4) 00:23:51.151 19.247 - 19.342: 99.7068% ( 4) 00:23:51.151 19.342 - 19.437: 99.7294% ( 3) 00:23:51.151 19.437 - 
19.532: 99.7519% ( 3) 00:23:51.151 19.532 - 19.627: 99.7595% ( 1) 00:23:51.151 19.627 - 19.721: 99.7670% ( 1) 00:23:51.151 19.721 - 19.816: 99.7820% ( 2) 00:23:51.151 19.911 - 20.006: 99.7895% ( 1) 00:23:51.151 20.006 - 20.101: 99.7970% ( 1) 00:23:51.151 20.101 - 20.196: 99.8046% ( 1) 00:23:51.151 20.196 - 20.290: 99.8121% ( 1) 00:23:51.151 20.385 - 20.480: 99.8271% ( 2) 00:23:51.151 20.859 - 20.954: 99.8346% ( 1) 00:23:51.151 22.281 - 22.376: 99.8497% ( 2) 00:23:51.151 25.221 - 25.410: 99.8572% ( 1) 00:23:51.151 32.806 - 32.996: 99.8647% ( 1) 00:23:51.151 3980.705 - 4004.978: 99.9399% ( 10) 00:23:51.151 4004.978 - 4029.250: 99.9925% ( 7) 00:23:51.151 5995.330 - 6019.603: 100.0000% ( 1) 00:23:51.151 00:23:51.151 Complete histogram 00:23:51.151 ================== 00:23:51.151 Range in us Cumulative Count 00:23:51.151 2.074 - 2.086: 4.4201% ( 588) 00:23:51.151 2.086 - 2.098: 19.7625% ( 2041) 00:23:51.151 2.098 - 2.110: 22.5814% ( 375) 00:23:51.151 2.110 - 2.121: 40.2390% ( 2349) 00:23:51.151 2.121 - 2.133: 56.8218% ( 2206) 00:23:51.151 2.133 - 2.145: 59.0919% ( 302) 00:23:51.151 2.145 - 2.157: 63.8653% ( 635) 00:23:51.151 2.157 - 2.169: 68.3530% ( 597) 00:23:51.151 2.169 - 2.181: 69.5407% ( 158) 00:23:51.151 2.181 - 2.193: 75.6747% ( 816) 00:23:51.151 2.193 - 2.204: 80.0947% ( 588) 00:23:51.151 2.204 - 2.216: 80.8690% ( 103) 00:23:51.151 2.216 - 2.228: 82.9963% ( 283) 00:23:51.151 2.228 - 2.240: 85.6048% ( 347) 00:23:51.151 2.240 - 2.252: 86.5970% ( 132) 00:23:51.151 2.252 - 2.264: 89.7392% ( 418) 00:23:51.151 2.264 - 2.276: 92.4528% ( 361) 00:23:51.151 2.276 - 2.287: 93.1895% ( 98) 00:23:51.151 2.287 - 2.299: 93.7007% ( 68) 00:23:51.151 2.299 - 2.311: 94.2344% ( 71) 00:23:51.151 2.311 - 2.323: 94.5651% ( 44) 00:23:51.151 2.323 - 2.335: 95.0237% ( 61) 00:23:51.151 2.335 - 2.347: 95.4146% ( 52) 00:23:51.151 2.347 - 2.359: 95.5198% ( 14) 00:23:51.151 2.359 - 2.370: 95.7002% ( 24) 00:23:51.151 2.370 - 2.382: 95.9784% ( 37) 00:23:51.151 2.382 - 2.394: 96.1888% ( 28) 00:23:51.151 2.394 - 2.406: 96.5045% ( 42) 00:23:51.151 2.406 - 2.418: 96.9330% ( 57) 00:23:51.151 2.418 - 2.430: 97.2487% ( 42) 00:23:51.151 2.430 - 2.441: 97.4893% ( 32) 00:23:51.151 2.441 - 2.453: 97.6697% ( 24) 00:23:51.151 2.453 - 2.465: 97.8125% ( 19) 00:23:51.151 2.465 - 2.477: 97.8877% ( 10) 00:23:51.151 2.477 - 2.489: 98.0005% ( 15) 00:23:51.151 2.489 - 2.501: 98.1132% ( 15) 00:23:51.151 2.501 - 2.513: 98.1733% ( 8) 00:23:51.151 2.513 - 2.524: 98.2560% ( 11) 00:23:51.151 2.524 - 2.536: 98.3237% ( 9) 00:23:51.151 2.536 - 2.548: 98.3538% ( 4) 00:23:51.151 2.548 - 2.560: 98.3763% ( 3) 00:23:51.151 2.560 - 2.572: 98.4064% ( 4) 00:23:51.151 2.596 - 2.607: 98.4139% ( 1) 00:23:51.151 2.607 - 2.619: 98.4289% ( 2) 00:23:51.151 2.619 - 2.631: 98.4364% ( 1) 00:23:51.151 2.631 - 2.643: 98.4590% ( 3) 00:23:51.151 2.643 - 2.655: 98.4740% ( 2) 00:23:51.151 2.714 - 2.726: 98.4815% ( 1) 00:23:51.151 2.833 - 2.844: 98.4891% ( 1) 00:23:51.151 2.916 - 2.927: 98.4966% ( 1) 00:23:51.151 2.975 - 2.987: 98.5041% ( 1) 00:23:51.151 3.153 - 3.176: 98.5116% ( 1) 00:23:51.151 3.224 - 3.247: 98.5191% ( 1) 00:23:51.151 3.247 - 3.271: 98.5342% ( 2) 00:23:51.151 3.295 - 3.319: 98.5417% ( 1) 00:23:51.151 3.319 - 3.342: 98.5492% ( 1) 00:23:51.151 3.342 - 3.366: 98.5642% ( 2) 00:23:51.151 3.390 - 3.413: 98.5718% ( 1) 00:23:51.151 3.413 - 3.437: 98.6093% ( 5) 00:23:51.151 3.437 - 3.461: 98.6169% ( 1) 00:23:51.151 3.461 - 3.484: 98.6244% ( 1) 00:23:51.151 3.484 - 3.508: 98.6469% ( 3) 00:23:51.151 3.532 - 3.556: 98.6620% ( 2) 00:23:51.151 3.556 - 3.579: 98.6770% 
( 2) 00:23:51.151 3.627 - 3.650: 98.6845% ( 1) 00:23:51.151 3.674 - 3.698: 98.6920% ( 1) 00:23:51.151 3.698 - 3.721: 98.6995% ( 1) 00:23:51.151 3.721 - 3.745: 98.7146% ( 2) 00:23:51.151 3.793 - 3.816: 98.7221% ( 1) 00:23:51.151 3.864 - 3.887: 98.7296% ( 1) 00:23:51.151 4.077 - 4.101: 98.7371% ( 1) 00:23:51.151 4.124 - 4.148: 98.7446% ( 1) 00:23:51.151 4.812 - 4.836: 98.7522% ( 1) 00:23:51.151 5.073 - 5.096: 98.7672% ( 2) 00:23:51.151 5.357 - 5.381: 98.7822% ( 2) 00:23:51.151 5.381 - 5.404: 98.7897% ( 1) 00:23:51.151 5.547 - 5.570: 98.7973% ( 1) 00:23:51.151
[2024-05-15 08:49:45.896880] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:23:51.151
5.594 - 5.618: 98.8048% ( 1) 00:23:51.151 5.713 - 5.736: 98.8123% ( 1) 00:23:51.151 5.879 - 5.902: 98.8198% ( 1) 00:23:51.151 6.447 - 6.495: 98.8273% ( 1) 00:23:51.151 6.684 - 6.732: 98.8348% ( 1) 00:23:51.151 6.732 - 6.779: 98.8424% ( 1) 00:23:51.151 7.111 - 7.159: 98.8499% ( 1) 00:23:51.151 7.206 - 7.253: 98.8574% ( 1) 00:23:51.151 7.727 - 7.775: 98.8649% ( 1) 00:23:51.151 11.520 - 11.567: 98.8724% ( 1) 00:23:51.151 13.084 - 13.179: 98.8800% ( 1) 00:23:51.151 15.550 - 15.644: 98.9025% ( 3) 00:23:51.151 15.739 - 15.834: 98.9175% ( 2) 00:23:51.151 15.834 - 15.929: 98.9251% ( 1) 00:23:51.151 15.929 - 16.024: 98.9476% ( 3) 00:23:51.151 16.024 - 16.119: 98.9777% ( 4) 00:23:51.151 16.119 - 16.213: 98.9852% ( 1) 00:23:51.151 16.213 - 16.308: 99.0153% ( 4) 00:23:51.151 16.308 - 16.403: 99.0453% ( 4) 00:23:51.151 16.403 - 16.498: 99.0754% ( 4) 00:23:51.151 16.498 - 16.593: 99.1130% ( 5) 00:23:51.151 16.593 - 16.687: 99.1731% ( 8) 00:23:51.151 16.687 - 16.782: 99.2333% ( 8) 00:23:51.151 16.782 - 16.877: 99.2934% ( 8) 00:23:51.151 16.877 - 16.972: 99.3235% ( 4) 00:23:51.151 16.972 - 17.067: 99.3460% ( 3) 00:23:51.151 17.067 - 17.161: 99.3686% ( 3) 00:23:51.151 17.161 - 17.256: 99.3761% ( 1) 00:23:51.151 17.256 - 17.351: 99.3836% ( 1) 00:23:51.151 17.351 - 17.446: 99.4061% ( 3) 00:23:51.151 17.446 - 17.541: 99.4212% ( 2) 00:23:51.151 17.541 - 17.636: 99.4287% ( 1) 00:23:51.151 17.636 - 17.730: 99.4362% ( 1) 00:23:51.151 17.825 - 17.920: 99.4437% ( 1) 00:23:51.151 18.394 - 18.489: 99.4513% ( 1) 00:23:51.151 25.600 - 25.790: 99.4588% ( 1) 00:23:51.151 29.772 - 29.961: 99.4663% ( 1) 00:23:51.151 2002.489 - 2014.625: 99.4738% ( 1) 00:23:51.151 3932.160 - 3956.433: 99.4813% ( 1) 00:23:51.151 3980.705 - 4004.978: 99.8572% ( 50) 00:23:51.151 4004.978 - 4029.250: 100.0000% ( 19) 00:23:51.151 00:23:51.409 08:49:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:23:51.409 08:49:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:23:51.409 08:49:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:23:51.409 08:49:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:23:51.409 08:49:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:23:51.666 [ 00:23:51.666 { 00:23:51.666 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:51.666 "subtype": "Discovery", 00:23:51.666 "listen_addresses": [], 00:23:51.666 "allow_any_host": true, 00:23:51.666 "hosts": [] 00:23:51.666 }, 00:23:51.666 { 00:23:51.666 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:23:51.666 "subtype": "NVMe",
00:23:51.666 "listen_addresses": [ 00:23:51.666 { 00:23:51.666 "trtype": "VFIOUSER", 00:23:51.666 "adrfam": "IPv4", 00:23:51.666 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:23:51.666 "trsvcid": "0" 00:23:51.666 } 00:23:51.666 ], 00:23:51.666 "allow_any_host": true, 00:23:51.666 "hosts": [], 00:23:51.666 "serial_number": "SPDK1", 00:23:51.666 "model_number": "SPDK bdev Controller", 00:23:51.666 "max_namespaces": 32, 00:23:51.666 "min_cntlid": 1, 00:23:51.666 "max_cntlid": 65519, 00:23:51.666 "namespaces": [ 00:23:51.666 { 00:23:51.666 "nsid": 1, 00:23:51.666 "bdev_name": "Malloc1", 00:23:51.666 "name": "Malloc1", 00:23:51.666 "nguid": "5E5D3E8E0CEA40A88FD9F9F010C20A78", 00:23:51.666 "uuid": "5e5d3e8e-0cea-40a8-8fd9-f9f010c20a78" 00:23:51.666 } 00:23:51.666 ] 00:23:51.666 }, 00:23:51.666 { 00:23:51.666 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:23:51.666 "subtype": "NVMe", 00:23:51.666 "listen_addresses": [ 00:23:51.666 { 00:23:51.666 "trtype": "VFIOUSER", 00:23:51.666 "adrfam": "IPv4", 00:23:51.666 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:23:51.666 "trsvcid": "0" 00:23:51.666 } 00:23:51.666 ], 00:23:51.666 "allow_any_host": true, 00:23:51.666 "hosts": [], 00:23:51.666 "serial_number": "SPDK2", 00:23:51.666 "model_number": "SPDK bdev Controller", 00:23:51.666 "max_namespaces": 32, 00:23:51.666 "min_cntlid": 1, 00:23:51.666 "max_cntlid": 65519, 00:23:51.666 "namespaces": [ 00:23:51.666 { 00:23:51.666 "nsid": 1, 00:23:51.666 "bdev_name": "Malloc2", 00:23:51.666 "name": "Malloc2", 00:23:51.666 "nguid": "5B48CBBBFFC6479CB7FC5809BD986749", 00:23:51.666 "uuid": "5b48cbbb-ffc6-479c-b7fc-5809bd986749" 00:23:51.666 } 00:23:51.666 ] 00:23:51.666 } 00:23:51.666 ] 00:23:51.666 08:49:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:51.666 08:49:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2238571 00:23:51.666 08:49:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:23:51.666 08:49:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:23:51.666 08:49:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # local i=0 00:23:51.666 08:49:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:51.666 08:49:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:51.666 08:49:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # return 0 00:23:51.666 08:49:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:23:51.666 08:49:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:23:51.666 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.666 [2024-05-15 08:49:46.411743] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:23:51.923 Malloc3 00:23:51.923 08:49:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:23:52.180 [2024-05-15 08:49:46.756364] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:23:52.180 08:49:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:23:52.180 Asynchronous Event Request test 00:23:52.180 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:23:52.180 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:23:52.180 Registering asynchronous event callbacks... 00:23:52.180 Starting namespace attribute notice tests for all controllers... 00:23:52.180 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:52.180 aer_cb - Changed Namespace 00:23:52.180 Cleaning up... 00:23:52.440 [ 00:23:52.440 { 00:23:52.440 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:52.440 "subtype": "Discovery", 00:23:52.440 "listen_addresses": [], 00:23:52.440 "allow_any_host": true, 00:23:52.440 "hosts": [] 00:23:52.440 }, 00:23:52.440 { 00:23:52.440 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:23:52.440 "subtype": "NVMe", 00:23:52.440 "listen_addresses": [ 00:23:52.440 { 00:23:52.440 "trtype": "VFIOUSER", 00:23:52.440 "adrfam": "IPv4", 00:23:52.440 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:23:52.440 "trsvcid": "0" 00:23:52.440 } 00:23:52.440 ], 00:23:52.440 "allow_any_host": true, 00:23:52.440 "hosts": [], 00:23:52.440 "serial_number": "SPDK1", 00:23:52.440 "model_number": "SPDK bdev Controller", 00:23:52.440 "max_namespaces": 32, 00:23:52.440 "min_cntlid": 1, 00:23:52.440 "max_cntlid": 65519, 00:23:52.440 "namespaces": [ 00:23:52.440 { 00:23:52.440 "nsid": 1, 00:23:52.440 "bdev_name": "Malloc1", 00:23:52.440 "name": "Malloc1", 00:23:52.440 "nguid": "5E5D3E8E0CEA40A88FD9F9F010C20A78", 00:23:52.440 "uuid": "5e5d3e8e-0cea-40a8-8fd9-f9f010c20a78" 00:23:52.440 }, 00:23:52.440 { 00:23:52.440 "nsid": 2, 00:23:52.440 "bdev_name": "Malloc3", 00:23:52.440 "name": "Malloc3", 00:23:52.440 "nguid": "9F9F6B5FD067461195859AC97BE636CE", 00:23:52.440 "uuid": "9f9f6b5f-d067-4611-9585-9ac97be636ce" 00:23:52.440 } 00:23:52.440 ] 00:23:52.440 }, 00:23:52.440 { 00:23:52.440 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:23:52.440 "subtype": "NVMe", 00:23:52.440 "listen_addresses": [ 00:23:52.440 { 00:23:52.440 "trtype": "VFIOUSER", 00:23:52.440 "adrfam": "IPv4", 00:23:52.440 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:23:52.440 "trsvcid": "0" 00:23:52.440 } 00:23:52.440 ], 00:23:52.440 "allow_any_host": true, 00:23:52.440 "hosts": [], 00:23:52.440 "serial_number": "SPDK2", 00:23:52.440 "model_number": "SPDK bdev Controller", 00:23:52.440 
"max_namespaces": 32, 00:23:52.440 "min_cntlid": 1, 00:23:52.440 "max_cntlid": 65519, 00:23:52.440 "namespaces": [ 00:23:52.440 { 00:23:52.440 "nsid": 1, 00:23:52.440 "bdev_name": "Malloc2", 00:23:52.440 "name": "Malloc2", 00:23:52.440 "nguid": "5B48CBBBFFC6479CB7FC5809BD986749", 00:23:52.440 "uuid": "5b48cbbb-ffc6-479c-b7fc-5809bd986749" 00:23:52.440 } 00:23:52.440 ] 00:23:52.440 } 00:23:52.440 ] 00:23:52.440 08:49:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2238571 00:23:52.440 08:49:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:23:52.440 08:49:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:23:52.440 08:49:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:23:52.440 08:49:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:23:52.440 [2024-05-15 08:49:47.032209] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:23:52.440 [2024-05-15 08:49:47.032291] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2238614 ] 00:23:52.440 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.440 [2024-05-15 08:49:47.065366] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:23:52.440 [2024-05-15 08:49:47.074504] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:23:52.440 [2024-05-15 08:49:47.074550] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9273e03000 00:23:52.440 [2024-05-15 08:49:47.075509] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:23:52.440 [2024-05-15 08:49:47.076530] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:23:52.440 [2024-05-15 08:49:47.077552] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:23:52.440 [2024-05-15 08:49:47.078563] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:23:52.440 [2024-05-15 08:49:47.079597] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:23:52.440 [2024-05-15 08:49:47.080577] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:23:52.440 [2024-05-15 08:49:47.081587] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:23:52.440 [2024-05-15 08:49:47.082604] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:23:52.440 [2024-05-15 08:49:47.083616] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:23:52.440 [2024-05-15 08:49:47.083638] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9272bb9000 00:23:52.440 [2024-05-15 08:49:47.084750] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:23:52.440 [2024-05-15 08:49:47.099529] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:23:52.440 [2024-05-15 08:49:47.099564] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:23:52.440 [2024-05-15 08:49:47.101652] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:23:52.440 [2024-05-15 08:49:47.101703] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:23:52.440 [2024-05-15 08:49:47.101795] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:23:52.440 [2024-05-15 08:49:47.101817] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:23:52.440 [2024-05-15 08:49:47.101828] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:23:52.440 [2024-05-15 08:49:47.102662] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:23:52.440 [2024-05-15 08:49:47.102682] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:23:52.440 [2024-05-15 08:49:47.102694] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:23:52.440 [2024-05-15 08:49:47.103673] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:23:52.440 [2024-05-15 08:49:47.103694] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:23:52.440 [2024-05-15 08:49:47.103708] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:23:52.440 [2024-05-15 08:49:47.104681] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:23:52.440 [2024-05-15 08:49:47.104701] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:52.440 [2024-05-15 08:49:47.105689] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:23:52.440 [2024-05-15 08:49:47.105709] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:23:52.440 [2024-05-15 08:49:47.105718] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:23:52.440 [2024-05-15 08:49:47.105729] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:52.440 [2024-05-15 08:49:47.105839] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:23:52.440 [2024-05-15 08:49:47.105847] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:52.440 [2024-05-15 08:49:47.105855] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:23:52.440 [2024-05-15 08:49:47.106699] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:23:52.440 [2024-05-15 08:49:47.107703] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:23:52.440 [2024-05-15 08:49:47.108710] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:23:52.440 [2024-05-15 08:49:47.109699] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:23:52.440 [2024-05-15 08:49:47.109782] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:52.441 [2024-05-15 08:49:47.110716] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:23:52.441 [2024-05-15 08:49:47.110736] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:52.441 [2024-05-15 08:49:47.110749] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:23:52.441 [2024-05-15 08:49:47.110773] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:23:52.441 [2024-05-15 08:49:47.110789] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:23:52.441 [2024-05-15 08:49:47.110812] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:23:52.441 [2024-05-15 08:49:47.110822] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:23:52.441 [2024-05-15 08:49:47.110842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:23:52.441 [2024-05-15 08:49:47.121233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:23:52.441 [2024-05-15 08:49:47.121258] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:23:52.441 [2024-05-15 08:49:47.121268] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:23:52.441 [2024-05-15 08:49:47.121276] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:23:52.441 [2024-05-15 08:49:47.121283] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:23:52.441 [2024-05-15 08:49:47.121292] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:23:52.441 [2024-05-15 08:49:47.121300] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:23:52.441 [2024-05-15 08:49:47.121308] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:23:52.441 [2024-05-15 08:49:47.121326] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:23:52.441 [2024-05-15 08:49:47.121347] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:23:52.441 [2024-05-15 08:49:47.129225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:23:52.441 [2024-05-15 08:49:47.129254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.441 [2024-05-15 08:49:47.129269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.441 [2024-05-15 08:49:47.129281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.441 [2024-05-15 08:49:47.129294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.441 [2024-05-15 08:49:47.129303] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:23:52.441 [2024-05-15 08:49:47.129315] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:52.441 [2024-05-15 08:49:47.129328] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:23:52.441 [2024-05-15 08:49:47.137240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:23:52.441 [2024-05-15 08:49:47.137263] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:23:52.441 [2024-05-15 08:49:47.137277] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:52.441 [2024-05-15 08:49:47.137289] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:23:52.441 [2024-05-15 08:49:47.137300] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:23:52.441 [2024-05-15 08:49:47.137314] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:23:52.441 [2024-05-15 08:49:47.145233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:23:52.441 [2024-05-15 08:49:47.145296] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:23:52.441 [2024-05-15 08:49:47.145313] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:23:52.441 [2024-05-15 08:49:47.145326] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:23:52.441 [2024-05-15 08:49:47.145335] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:23:52.441 [2024-05-15 08:49:47.145345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:23:52.441 [2024-05-15 08:49:47.153225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:23:52.441 [2024-05-15 08:49:47.153254] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:23:52.441 [2024-05-15 08:49:47.153269] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:23:52.441 [2024-05-15 08:49:47.153283] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:23:52.441 [2024-05-15 08:49:47.153295] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:23:52.441 [2024-05-15 08:49:47.153304] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:23:52.441 [2024-05-15 08:49:47.153314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:23:52.441 [2024-05-15 08:49:47.161224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:23:52.441 [2024-05-15 08:49:47.161247] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:52.441 [2024-05-15 08:49:47.161279] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:52.441 [2024-05-15 08:49:47.161293] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:23:52.441 [2024-05-15 08:49:47.161302] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:23:52.441 [2024-05-15 08:49:47.161312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:23:52.441 [2024-05-15 08:49:47.167323] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:23:52.441 [2024-05-15 08:49:47.167351] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:52.441 [2024-05-15 08:49:47.167371] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:23:52.441 [2024-05-15 08:49:47.167387] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:23:52.441 [2024-05-15 08:49:47.167399] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:52.441 [2024-05-15 08:49:47.167408] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:23:52.441 [2024-05-15 08:49:47.167417] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:23:52.441 [2024-05-15 08:49:47.167425] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:23:52.441 [2024-05-15 08:49:47.167434] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:23:52.441 [2024-05-15 08:49:47.167464] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:23:52.441 [2024-05-15 08:49:47.177242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:23:52.441 [2024-05-15 08:49:47.177269] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:23:52.441 [2024-05-15 08:49:47.185242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:23:52.441 [2024-05-15 08:49:47.185275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:23:52.441 [2024-05-15 08:49:47.193226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:23:52.441 [2024-05-15 08:49:47.193252] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:23:52.441 [2024-05-15 08:49:47.201227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:23:52.442 [2024-05-15 08:49:47.201253] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:23:52.442 [2024-05-15 08:49:47.201263] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:23:52.442 [2024-05-15 08:49:47.201270] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:23:52.442 [2024-05-15 08:49:47.201276] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:23:52.442 [2024-05-15 08:49:47.201286] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:23:52.442 [2024-05-15 08:49:47.201297] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:23:52.442 [2024-05-15 08:49:47.201306] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:23:52.442 [2024-05-15 08:49:47.201315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:23:52.442 [2024-05-15 08:49:47.201326] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:23:52.442 [2024-05-15 08:49:47.201334] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:23:52.442 [2024-05-15 08:49:47.201343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:23:52.442 [2024-05-15 08:49:47.201363] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:23:52.442 [2024-05-15 08:49:47.201373] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:23:52.442 [2024-05-15 08:49:47.201382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:23:52.442 [2024-05-15 08:49:47.209228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:23:52.442 [2024-05-15 08:49:47.209255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:23:52.442 [2024-05-15 08:49:47.209271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:23:52.442 [2024-05-15 08:49:47.209289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:23:52.442 ===================================================== 00:23:52.442 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:23:52.442 ===================================================== 00:23:52.442 Controller Capabilities/Features 00:23:52.442 ================================ 00:23:52.442 Vendor ID: 4e58 00:23:52.442 Subsystem Vendor ID: 4e58 00:23:52.442 Serial Number: SPDK2 00:23:52.442 Model Number: SPDK bdev Controller 00:23:52.442 Firmware Version: 24.05 00:23:52.442 Recommended Arb Burst: 6 00:23:52.442 IEEE OUI Identifier: 8d 6b 50 00:23:52.442 Multi-path I/O 00:23:52.442 May have multiple subsystem ports: Yes 00:23:52.442 May have multiple controllers: Yes 00:23:52.442 Associated with SR-IOV VF: No 00:23:52.442 Max Data Transfer Size: 131072 00:23:52.442 Max Number of Namespaces: 32 00:23:52.442 Max Number of I/O Queues: 127 00:23:52.442 NVMe Specification Version (VS): 1.3 00:23:52.442 NVMe Specification Version (Identify): 1.3 00:23:52.442 Maximum Queue Entries: 256 00:23:52.442 Contiguous Queues Required: Yes 00:23:52.442 Arbitration Mechanisms Supported 00:23:52.442 Weighted Round Robin: Not Supported 00:23:52.442 Vendor Specific: Not Supported 00:23:52.442 Reset Timeout: 15000 ms 00:23:52.442 Doorbell Stride: 4 bytes 
00:23:52.442 NVM Subsystem Reset: Not Supported 00:23:52.442 Command Sets Supported 00:23:52.442 NVM Command Set: Supported 00:23:52.442 Boot Partition: Not Supported 00:23:52.442 Memory Page Size Minimum: 4096 bytes 00:23:52.442 Memory Page Size Maximum: 4096 bytes 00:23:52.442 Persistent Memory Region: Not Supported 00:23:52.442 Optional Asynchronous Events Supported 00:23:52.442 Namespace Attribute Notices: Supported 00:23:52.442 Firmware Activation Notices: Not Supported 00:23:52.442 ANA Change Notices: Not Supported 00:23:52.442 PLE Aggregate Log Change Notices: Not Supported 00:23:52.442 LBA Status Info Alert Notices: Not Supported 00:23:52.442 EGE Aggregate Log Change Notices: Not Supported 00:23:52.442 Normal NVM Subsystem Shutdown event: Not Supported 00:23:52.442 Zone Descriptor Change Notices: Not Supported 00:23:52.442 Discovery Log Change Notices: Not Supported 00:23:52.442 Controller Attributes 00:23:52.442 128-bit Host Identifier: Supported 00:23:52.442 Non-Operational Permissive Mode: Not Supported 00:23:52.442 NVM Sets: Not Supported 00:23:52.442 Read Recovery Levels: Not Supported 00:23:52.442 Endurance Groups: Not Supported 00:23:52.442 Predictable Latency Mode: Not Supported 00:23:52.442 Traffic Based Keep ALive: Not Supported 00:23:52.442 Namespace Granularity: Not Supported 00:23:52.442 SQ Associations: Not Supported 00:23:52.442 UUID List: Not Supported 00:23:52.442 Multi-Domain Subsystem: Not Supported 00:23:52.442 Fixed Capacity Management: Not Supported 00:23:52.442 Variable Capacity Management: Not Supported 00:23:52.442 Delete Endurance Group: Not Supported 00:23:52.442 Delete NVM Set: Not Supported 00:23:52.442 Extended LBA Formats Supported: Not Supported 00:23:52.442 Flexible Data Placement Supported: Not Supported 00:23:52.442 00:23:52.442 Controller Memory Buffer Support 00:23:52.442 ================================ 00:23:52.442 Supported: No 00:23:52.442 00:23:52.442 Persistent Memory Region Support 00:23:52.442 ================================ 00:23:52.442 Supported: No 00:23:52.442 00:23:52.442 Admin Command Set Attributes 00:23:52.442 ============================ 00:23:52.442 Security Send/Receive: Not Supported 00:23:52.442 Format NVM: Not Supported 00:23:52.442 Firmware Activate/Download: Not Supported 00:23:52.442 Namespace Management: Not Supported 00:23:52.442 Device Self-Test: Not Supported 00:23:52.442 Directives: Not Supported 00:23:52.442 NVMe-MI: Not Supported 00:23:52.442 Virtualization Management: Not Supported 00:23:52.442 Doorbell Buffer Config: Not Supported 00:23:52.442 Get LBA Status Capability: Not Supported 00:23:52.442 Command & Feature Lockdown Capability: Not Supported 00:23:52.442 Abort Command Limit: 4 00:23:52.442 Async Event Request Limit: 4 00:23:52.442 Number of Firmware Slots: N/A 00:23:52.442 Firmware Slot 1 Read-Only: N/A 00:23:52.442 Firmware Activation Without Reset: N/A 00:23:52.442 Multiple Update Detection Support: N/A 00:23:52.442 Firmware Update Granularity: No Information Provided 00:23:52.442 Per-Namespace SMART Log: No 00:23:52.442 Asymmetric Namespace Access Log Page: Not Supported 00:23:52.442 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:23:52.442 Command Effects Log Page: Supported 00:23:52.442 Get Log Page Extended Data: Supported 00:23:52.442 Telemetry Log Pages: Not Supported 00:23:52.442 Persistent Event Log Pages: Not Supported 00:23:52.442 Supported Log Pages Log Page: May Support 00:23:52.442 Commands Supported & Effects Log Page: Not Supported 00:23:52.442 Feature Identifiers & Effects Log Page:May 
Support 00:23:52.442 NVMe-MI Commands & Effects Log Page: May Support 00:23:52.442 Data Area 4 for Telemetry Log: Not Supported 00:23:52.442 Error Log Page Entries Supported: 128 00:23:52.442 Keep Alive: Supported 00:23:52.442 Keep Alive Granularity: 10000 ms 00:23:52.442 00:23:52.442 NVM Command Set Attributes 00:23:52.442 ========================== 00:23:52.442 Submission Queue Entry Size 00:23:52.442 Max: 64 00:23:52.442 Min: 64 00:23:52.442 Completion Queue Entry Size 00:23:52.442 Max: 16 00:23:52.442 Min: 16 00:23:52.442 Number of Namespaces: 32 00:23:52.442 Compare Command: Supported 00:23:52.442 Write Uncorrectable Command: Not Supported 00:23:52.442 Dataset Management Command: Supported 00:23:52.442 Write Zeroes Command: Supported 00:23:52.442 Set Features Save Field: Not Supported 00:23:52.442 Reservations: Not Supported 00:23:52.442 Timestamp: Not Supported 00:23:52.442 Copy: Supported 00:23:52.443 Volatile Write Cache: Present 00:23:52.443 Atomic Write Unit (Normal): 1 00:23:52.443 Atomic Write Unit (PFail): 1 00:23:52.443 Atomic Compare & Write Unit: 1 00:23:52.443 Fused Compare & Write: Supported 00:23:52.443 Scatter-Gather List 00:23:52.443 SGL Command Set: Supported (Dword aligned) 00:23:52.443 SGL Keyed: Not Supported 00:23:52.443 SGL Bit Bucket Descriptor: Not Supported 00:23:52.443 SGL Metadata Pointer: Not Supported 00:23:52.443 Oversized SGL: Not Supported 00:23:52.443 SGL Metadata Address: Not Supported 00:23:52.443 SGL Offset: Not Supported 00:23:52.443 Transport SGL Data Block: Not Supported 00:23:52.443 Replay Protected Memory Block: Not Supported 00:23:52.443 00:23:52.443 Firmware Slot Information 00:23:52.443 ========================= 00:23:52.443 Active slot: 1 00:23:52.443 Slot 1 Firmware Revision: 24.05 00:23:52.443 00:23:52.443 00:23:52.443 Commands Supported and Effects 00:23:52.443 ============================== 00:23:52.443 Admin Commands 00:23:52.443 -------------- 00:23:52.443 Get Log Page (02h): Supported 00:23:52.443 Identify (06h): Supported 00:23:52.443 Abort (08h): Supported 00:23:52.443 Set Features (09h): Supported 00:23:52.443 Get Features (0Ah): Supported 00:23:52.443 Asynchronous Event Request (0Ch): Supported 00:23:52.443 Keep Alive (18h): Supported 00:23:52.443 I/O Commands 00:23:52.443 ------------ 00:23:52.443 Flush (00h): Supported LBA-Change 00:23:52.443 Write (01h): Supported LBA-Change 00:23:52.443 Read (02h): Supported 00:23:52.443 Compare (05h): Supported 00:23:52.443 Write Zeroes (08h): Supported LBA-Change 00:23:52.443 Dataset Management (09h): Supported LBA-Change 00:23:52.443 Copy (19h): Supported LBA-Change 00:23:52.443 Unknown (79h): Supported LBA-Change 00:23:52.443 Unknown (7Ah): Supported 00:23:52.443 00:23:52.443 Error Log 00:23:52.443 ========= 00:23:52.443 00:23:52.443 Arbitration 00:23:52.443 =========== 00:23:52.443 Arbitration Burst: 1 00:23:52.443 00:23:52.443 Power Management 00:23:52.443 ================ 00:23:52.443 Number of Power States: 1 00:23:52.443 Current Power State: Power State #0 00:23:52.443 Power State #0: 00:23:52.443 Max Power: 0.00 W 00:23:52.443 Non-Operational State: Operational 00:23:52.443 Entry Latency: Not Reported 00:23:52.443 Exit Latency: Not Reported 00:23:52.443 Relative Read Throughput: 0 00:23:52.443 Relative Read Latency: 0 00:23:52.443 Relative Write Throughput: 0 00:23:52.443 Relative Write Latency: 0 00:23:52.443 Idle Power: Not Reported 00:23:52.443 Active Power: Not Reported 00:23:52.443 Non-Operational Permissive Mode: Not Supported 00:23:52.443 00:23:52.443 Health Information 
00:23:52.443 ================== 00:23:52.443 Critical Warnings: 00:23:52.443 Available Spare Space: OK 00:23:52.443 Temperature: OK 00:23:52.443 Device Reliability: OK 00:23:52.443 Read Only: No 00:23:52.443 Volatile Memory Backup: OK 00:23:52.443 Current Temperature: 0 Kelvin (-273 Celsius) [2024-05-15 08:49:47.209404] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:23:52.443 [2024-05-15 08:49:47.217242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:23:52.443 [2024-05-15 08:49:47.217297] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:23:52.443 [2024-05-15 08:49:47.217314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.443 [2024-05-15 08:49:47.217325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.443 [2024-05-15 08:49:47.217335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.443 [2024-05-15 08:49:47.217345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.443 [2024-05-15 08:49:47.217430] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:23:52.443 [2024-05-15 08:49:47.217450] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:23:52.443 [2024-05-15 08:49:47.218433] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:23:52.443 [2024-05-15 08:49:47.218503] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:23:52.443 [2024-05-15 08:49:47.218519] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:23:52.443 [2024-05-15 08:49:47.219443] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:23:52.443 [2024-05-15 08:49:47.219466] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:23:52.443 [2024-05-15 08:49:47.219517] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:23:52.443 [2024-05-15 08:49:47.220712] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:23:52.700 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:52.700 Available Spare: 0% 00:23:52.700 Available Spare Threshold: 0% 00:23:52.700 Life Percentage Used: 0% 00:23:52.700 Data Units Read: 0 00:23:52.700 Data Units Written: 0 00:23:52.700 Host Read Commands: 0 00:23:52.700 Host Write Commands: 0 00:23:52.700 Controller Busy Time: 0 minutes 00:23:52.700 Power Cycles: 0 00:23:52.700 Power On Hours: 0 hours 00:23:52.700 Unsafe Shutdowns: 0 00:23:52.700 Unrecoverable Media Errors: 0 00:23:52.700 Lifetime Error Log Entries: 0 00:23:52.700 Warning Temperature Time: 0
minutes 00:23:52.700 Critical Temperature Time: 0 minutes 00:23:52.700 00:23:52.700 Number of Queues 00:23:52.700 ================ 00:23:52.700 Number of I/O Submission Queues: 127 00:23:52.700 Number of I/O Completion Queues: 127 00:23:52.700 00:23:52.700 Active Namespaces 00:23:52.700 ================= 00:23:52.700 Namespace ID:1 00:23:52.700 Error Recovery Timeout: Unlimited 00:23:52.700 Command Set Identifier: NVM (00h) 00:23:52.700 Deallocate: Supported 00:23:52.700 Deallocated/Unwritten Error: Not Supported 00:23:52.700 Deallocated Read Value: Unknown 00:23:52.700 Deallocate in Write Zeroes: Not Supported 00:23:52.700 Deallocated Guard Field: 0xFFFF 00:23:52.700 Flush: Supported 00:23:52.700 Reservation: Supported 00:23:52.700 Namespace Sharing Capabilities: Multiple Controllers 00:23:52.700 Size (in LBAs): 131072 (0GiB) 00:23:52.700 Capacity (in LBAs): 131072 (0GiB) 00:23:52.700 Utilization (in LBAs): 131072 (0GiB) 00:23:52.700 NGUID: 5B48CBBBFFC6479CB7FC5809BD986749 00:23:52.700 UUID: 5b48cbbb-ffc6-479c-b7fc-5809bd986749 00:23:52.700 Thin Provisioning: Not Supported 00:23:52.700 Per-NS Atomic Units: Yes 00:23:52.700 Atomic Boundary Size (Normal): 0 00:23:52.700 Atomic Boundary Size (PFail): 0 00:23:52.700 Atomic Boundary Offset: 0 00:23:52.700 Maximum Single Source Range Length: 65535 00:23:52.700 Maximum Copy Length: 65535 00:23:52.700 Maximum Source Range Count: 1 00:23:52.700 NGUID/EUI64 Never Reused: No 00:23:52.700 Namespace Write Protected: No 00:23:52.700 Number of LBA Formats: 1 00:23:52.700 Current LBA Format: LBA Format #00 00:23:52.700 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:52.700 00:23:52.701 08:49:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:23:52.701 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.701 [2024-05-15 08:49:47.448051] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:23:57.959 Initializing NVMe Controllers 00:23:57.959 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:23:57.959 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:23:57.959 Initialization complete. Launching workers. 
00:23:57.959 ======================================================== 00:23:57.959 Latency(us) 00:23:57.959 Device Information : IOPS MiB/s Average min max 00:23:57.959 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34734.14 135.68 3684.54 1152.66 7346.02 00:23:57.959 ======================================================== 00:23:57.959 Total : 34734.14 135.68 3684.54 1152.66 7346.02 00:23:57.959 00:23:57.959 [2024-05-15 08:49:52.552559] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:23:57.959 08:49:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:23:57.959 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.217 [2024-05-15 08:49:52.790267] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:24:03.494 Initializing NVMe Controllers 00:24:03.494 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:24:03.494 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:24:03.494 Initialization complete. Launching workers. 00:24:03.494 ======================================================== 00:24:03.494 Latency(us) 00:24:03.494 Device Information : IOPS MiB/s Average min max 00:24:03.494 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31929.37 124.72 4008.05 1217.87 10315.04 00:24:03.494 ======================================================== 00:24:03.494 Total : 31929.37 124.72 4008.05 1217.87 10315.04 00:24:03.494 00:24:03.494 [2024-05-15 08:49:57.809267] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:24:03.494 08:49:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:24:03.494 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.494 [2024-05-15 08:49:58.031144] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:24:08.794 [2024-05-15 08:50:03.155366] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:24:08.794 Initializing NVMe Controllers 00:24:08.794 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:24:08.794 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:24:08.794 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:24:08.794 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:24:08.794 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:24:08.794 Initialization complete. Launching workers. 
00:24:08.794 Starting thread on core 2 00:24:08.794 Starting thread on core 3 00:24:08.794 Starting thread on core 1 00:24:08.794 08:50:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:24:08.794 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.794 [2024-05-15 08:50:03.471718] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:24:12.078 [2024-05-15 08:50:06.537623] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:24:12.078 Initializing NVMe Controllers 00:24:12.078 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:24:12.078 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:24:12.078 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:24:12.078 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:24:12.078 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:24:12.078 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:24:12.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:24:12.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:24:12.078 Initialization complete. Launching workers. 00:24:12.078 Starting thread on core 1 with urgent priority queue 00:24:12.078 Starting thread on core 2 with urgent priority queue 00:24:12.078 Starting thread on core 3 with urgent priority queue 00:24:12.078 Starting thread on core 0 with urgent priority queue 00:24:12.078 SPDK bdev Controller (SPDK2 ) core 0: 4512.00 IO/s 22.16 secs/100000 ios 00:24:12.078 SPDK bdev Controller (SPDK2 ) core 1: 4451.67 IO/s 22.46 secs/100000 ios 00:24:12.078 SPDK bdev Controller (SPDK2 ) core 2: 4603.00 IO/s 21.72 secs/100000 ios 00:24:12.078 SPDK bdev Controller (SPDK2 ) core 3: 4714.67 IO/s 21.21 secs/100000 ios 00:24:12.078 ======================================================== 00:24:12.078 00:24:12.078 08:50:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:24:12.078 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.078 [2024-05-15 08:50:06.852758] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:24:12.078 Initializing NVMe Controllers 00:24:12.078 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:24:12.078 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:24:12.078 Namespace ID: 1 size: 0GB 00:24:12.078 Initialization complete. 00:24:12.078 INFO: using host memory buffer for IO 00:24:12.078 Hello world! 
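All of the example runs above (spdk_nvme_perf in read and write mode, reconnect, arbitration, hello_world) reach the same vfio-user controller through a -r transport ID string rather than a PCI address. For reference, the read run traced earlier boils down to the following invocation (binary path as it appears in this workspace; the flag glosses are the usual spdk_nvme_perf meanings and are worth confirming against the tool's --help):

# 4 KiB sequential reads at queue depth 128 for 5 s, worker pinned to core 1 (mask 0x2)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

Here -q sets the per-queue I/O depth, -o the I/O size in bytes, -w the access pattern, -t the run time in seconds and -c the core mask; the randrw runs additionally pass -M 50 for a 50/50 read/write mix.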
00:24:12.078 [2024-05-15 08:50:06.861807] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:24:12.336 08:50:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:24:12.336 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.593 [2024-05-15 08:50:07.171244] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:24:13.751 Initializing NVMe Controllers 00:24:13.751 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:24:13.751 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:24:13.751 Initialization complete. Launching workers. 00:24:13.751 submit (in ns) avg, min, max = 7765.6, 3548.9, 4018816.7 00:24:13.751 complete (in ns) avg, min, max = 24793.1, 2048.9, 4087050.0 00:24:13.751 00:24:13.751 Submit histogram 00:24:13.751 ================ 00:24:13.751 Range in us Cumulative Count 00:24:13.751 3.532 - 3.556: 0.0297% ( 4) 00:24:13.751 3.556 - 3.579: 3.3405% ( 446) 00:24:13.751 3.579 - 3.603: 11.6027% ( 1113) 00:24:13.751 3.603 - 3.627: 23.2722% ( 1572) 00:24:13.751 3.627 - 3.650: 33.6129% ( 1393) 00:24:13.751 3.650 - 3.674: 40.8730% ( 978) 00:24:13.751 3.674 - 3.698: 47.2051% ( 853) 00:24:13.751 3.698 - 3.721: 53.2477% ( 814) 00:24:13.751 3.721 - 3.745: 58.1842% ( 665) 00:24:13.751 3.745 - 3.769: 61.9182% ( 503) 00:24:13.751 3.769 - 3.793: 64.6797% ( 372) 00:24:13.751 3.793 - 3.816: 67.5154% ( 382) 00:24:13.751 3.816 - 3.840: 70.8633% ( 451) 00:24:13.751 3.840 - 3.864: 75.5772% ( 635) 00:24:13.751 3.864 - 3.887: 79.7862% ( 567) 00:24:13.751 3.887 - 3.911: 83.0525% ( 440) 00:24:13.752 3.911 - 3.935: 85.5987% ( 343) 00:24:13.752 3.935 - 3.959: 87.5882% ( 268) 00:24:13.752 3.959 - 3.982: 89.4217% ( 247) 00:24:13.752 3.982 - 4.006: 90.8173% ( 188) 00:24:13.752 4.006 - 4.030: 91.8492% ( 139) 00:24:13.752 4.030 - 4.053: 92.6657% ( 110) 00:24:13.752 4.053 - 4.077: 93.6456% ( 132) 00:24:13.752 4.077 - 4.101: 94.7888% ( 154) 00:24:13.752 4.101 - 4.124: 95.5757% ( 106) 00:24:13.752 4.124 - 4.148: 95.9914% ( 56) 00:24:13.752 4.148 - 4.172: 96.3477% ( 48) 00:24:13.752 4.172 - 4.196: 96.5778% ( 31) 00:24:13.752 4.196 - 4.219: 96.8599% ( 38) 00:24:13.752 4.219 - 4.243: 97.0158% ( 21) 00:24:13.752 4.243 - 4.267: 97.1346% ( 16) 00:24:13.752 4.267 - 4.290: 97.2905% ( 21) 00:24:13.752 4.290 - 4.314: 97.4167% ( 17) 00:24:13.752 4.314 - 4.338: 97.5058% ( 12) 00:24:13.752 4.338 - 4.361: 97.5874% ( 11) 00:24:13.752 4.361 - 4.385: 97.6320% ( 6) 00:24:13.752 4.385 - 4.409: 97.6988% ( 9) 00:24:13.752 4.409 - 4.433: 97.7285% ( 4) 00:24:13.752 4.433 - 4.456: 97.7878% ( 8) 00:24:13.752 4.456 - 4.480: 97.8175% ( 4) 00:24:13.752 4.480 - 4.504: 97.8250% ( 1) 00:24:13.752 4.504 - 4.527: 97.8398% ( 2) 00:24:13.752 4.527 - 4.551: 97.8547% ( 2) 00:24:13.752 4.551 - 4.575: 97.8621% ( 1) 00:24:13.752 4.599 - 4.622: 97.8695% ( 1) 00:24:13.752 4.622 - 4.646: 97.8769% ( 1) 00:24:13.752 4.646 - 4.670: 97.8843% ( 1) 00:24:13.752 4.741 - 4.764: 97.9066% ( 3) 00:24:13.752 4.764 - 4.788: 97.9215% ( 2) 00:24:13.752 4.788 - 4.812: 97.9512% ( 4) 00:24:13.752 4.836 - 4.859: 97.9660% ( 2) 00:24:13.752 4.859 - 4.883: 97.9883% ( 3) 00:24:13.752 4.883 - 4.907: 98.0180% ( 4) 00:24:13.752 4.907 - 4.930: 98.0774% ( 8) 00:24:13.752 4.930 - 4.954: 98.1219% ( 6) 00:24:13.752 4.954 - 4.978: 98.1367% ( 2) 00:24:13.752 4.978 
- 5.001: 98.1664% ( 4) 00:24:13.752 5.001 - 5.025: 98.2258% ( 8) 00:24:13.752 5.025 - 5.049: 98.2629% ( 5) 00:24:13.752 5.049 - 5.073: 98.2852% ( 3) 00:24:13.752 5.073 - 5.096: 98.3297% ( 6) 00:24:13.752 5.096 - 5.120: 98.3520% ( 3) 00:24:13.752 5.120 - 5.144: 98.3817% ( 4) 00:24:13.752 5.144 - 5.167: 98.4188% ( 5) 00:24:13.752 5.167 - 5.191: 98.4485% ( 4) 00:24:13.752 5.191 - 5.215: 98.4708% ( 3) 00:24:13.752 5.215 - 5.239: 98.4931% ( 3) 00:24:13.752 5.262 - 5.286: 98.5376% ( 6) 00:24:13.752 5.286 - 5.310: 98.5450% ( 1) 00:24:13.752 5.310 - 5.333: 98.5673% ( 3) 00:24:13.752 5.381 - 5.404: 98.5747% ( 1) 00:24:13.752 5.428 - 5.452: 98.5821% ( 1) 00:24:13.752 5.452 - 5.476: 98.5896% ( 1) 00:24:13.752 5.665 - 5.689: 98.5970% ( 1) 00:24:13.752 5.855 - 5.879: 98.6044% ( 1) 00:24:13.752 5.926 - 5.950: 98.6118% ( 1) 00:24:13.752 5.950 - 5.973: 98.6193% ( 1) 00:24:13.752 5.997 - 6.021: 98.6267% ( 1) 00:24:13.752 6.021 - 6.044: 98.6341% ( 1) 00:24:13.752 6.044 - 6.068: 98.6415% ( 1) 00:24:13.752 6.116 - 6.163: 98.6638% ( 3) 00:24:13.752 6.163 - 6.210: 98.6712% ( 1) 00:24:13.752 6.305 - 6.353: 98.6786% ( 1) 00:24:13.752 6.353 - 6.400: 98.6861% ( 1) 00:24:13.752 6.400 - 6.447: 98.7009% ( 2) 00:24:13.752 6.447 - 6.495: 98.7083% ( 1) 00:24:13.752 6.590 - 6.637: 98.7158% ( 1) 00:24:13.752 6.637 - 6.684: 98.7232% ( 1) 00:24:13.752 6.684 - 6.732: 98.7380% ( 2) 00:24:13.752 6.779 - 6.827: 98.7455% ( 1) 00:24:13.752 6.827 - 6.874: 98.7603% ( 2) 00:24:13.752 6.969 - 7.016: 98.7677% ( 1) 00:24:13.752 7.016 - 7.064: 98.7826% ( 2) 00:24:13.752 7.064 - 7.111: 98.7900% ( 1) 00:24:13.752 7.253 - 7.301: 98.7974% ( 1) 00:24:13.752 7.538 - 7.585: 98.8048% ( 1) 00:24:13.752 7.585 - 7.633: 98.8123% ( 1) 00:24:13.752 7.633 - 7.680: 98.8197% ( 1) 00:24:13.752 7.680 - 7.727: 98.8271% ( 1) 00:24:13.752 7.727 - 7.775: 98.8494% ( 3) 00:24:13.752 7.822 - 7.870: 98.8642% ( 2) 00:24:13.752 7.870 - 7.917: 98.8717% ( 1) 00:24:13.752 7.917 - 7.964: 98.8791% ( 1) 00:24:13.752 7.964 - 8.012: 98.8865% ( 1) 00:24:13.752 8.012 - 8.059: 98.8939% ( 1) 00:24:13.752 8.201 - 8.249: 98.9013% ( 1) 00:24:13.752 8.249 - 8.296: 98.9088% ( 1) 00:24:13.752 8.296 - 8.344: 98.9162% ( 1) 00:24:13.752 8.344 - 8.391: 98.9236% ( 1) 00:24:13.752 8.439 - 8.486: 98.9459% ( 3) 00:24:13.752 8.628 - 8.676: 98.9533% ( 1) 00:24:13.752 9.387 - 9.434: 98.9607% ( 1) 00:24:13.752 9.908 - 9.956: 98.9682% ( 1) 00:24:13.752 10.098 - 10.145: 98.9756% ( 1) 00:24:13.752 10.382 - 10.430: 98.9830% ( 1) 00:24:13.752 10.572 - 10.619: 98.9904% ( 1) 00:24:13.752 11.236 - 11.283: 98.9978% ( 1) 00:24:13.752 11.520 - 11.567: 99.0053% ( 1) 00:24:13.752 11.567 - 11.615: 99.0127% ( 1) 00:24:13.752 11.710 - 11.757: 99.0201% ( 1) 00:24:13.752 11.899 - 11.947: 99.0275% ( 1) 00:24:13.752 12.990 - 13.084: 99.0350% ( 1) 00:24:13.752 13.748 - 13.843: 99.0424% ( 1) 00:24:13.752 13.843 - 13.938: 99.0498% ( 1) 00:24:13.752 15.360 - 15.455: 99.0572% ( 1) 00:24:13.752 16.972 - 17.067: 99.0721% ( 2) 00:24:13.752 17.067 - 17.161: 99.0795% ( 1) 00:24:13.752 17.256 - 17.351: 99.0944% ( 2) 00:24:13.752 17.351 - 17.446: 99.1092% ( 2) 00:24:13.752 17.446 - 17.541: 99.1240% ( 2) 00:24:13.752 17.541 - 17.636: 99.1612% ( 5) 00:24:13.752 17.636 - 17.730: 99.1983% ( 5) 00:24:13.752 17.730 - 17.825: 99.2428% ( 6) 00:24:13.752 17.825 - 17.920: 99.2799% ( 5) 00:24:13.752 17.920 - 18.015: 99.3245% ( 6) 00:24:13.752 18.015 - 18.110: 99.3616% ( 5) 00:24:13.752 18.110 - 18.204: 99.4061% ( 6) 00:24:13.752 18.204 - 18.299: 99.5101% ( 14) 00:24:13.752 18.299 - 18.394: 99.5620% ( 7) 00:24:13.752 18.394 - 18.489: 
99.5917% ( 4) 00:24:13.752 18.489 - 18.584: 99.6511% ( 8) 00:24:13.752 18.584 - 18.679: 99.7105% ( 8) 00:24:13.752 18.679 - 18.773: 99.7402% ( 4) 00:24:13.752 18.773 - 18.868: 99.7476% ( 1) 00:24:13.752 18.868 - 18.963: 99.7625% ( 2) 00:24:13.752 18.963 - 19.058: 99.7847% ( 3) 00:24:13.752 19.058 - 19.153: 99.7921% ( 1) 00:24:13.752 19.153 - 19.247: 99.8144% ( 3) 00:24:13.752 19.247 - 19.342: 99.8293% ( 2) 00:24:13.752 19.437 - 19.532: 99.8367% ( 1) 00:24:13.752 19.532 - 19.627: 99.8590% ( 3) 00:24:13.752 19.627 - 19.721: 99.8664% ( 1) 00:24:13.752 19.721 - 19.816: 99.8812% ( 2) 00:24:13.752 21.333 - 21.428: 99.8886% ( 1) 00:24:13.752 23.704 - 23.799: 99.8961% ( 1) 00:24:13.752 24.083 - 24.178: 99.9035% ( 1) 00:24:13.752 3980.705 - 4004.978: 99.9852% ( 11) 00:24:13.752 4004.978 - 4029.250: 100.0000% ( 2) 00:24:13.752 00:24:13.752 Complete histogram 00:24:13.752 ================== 00:24:13.752 Range in us Cumulative Count 00:24:13.752 2.039 - 2.050: 0.0594% ( 8) 00:24:13.752 2.050 - 2.062: 10.1923% ( 1365) 00:24:13.752 2.062 - 2.074: 25.0984% ( 2008) 00:24:13.752 2.074 - 2.086: 28.1865% ( 416) 00:24:13.752 2.086 - 2.098: 50.4417% ( 2998) 00:24:13.752 2.098 - 2.110: 60.0846% ( 1299) 00:24:13.752 2.110 - 2.121: 61.9330% ( 249) 00:24:13.752 2.121 - 2.133: 68.6289% ( 902) 00:24:13.752 2.133 - 2.145: 71.4795% ( 384) 00:24:13.752 2.145 - 2.157: 73.3502% ( 252) 00:24:13.752 2.157 - 2.169: 80.8626% ( 1012) 00:24:13.752 2.169 - 2.181: 82.7259% ( 251) 00:24:13.752 2.181 - 2.193: 83.6612% ( 126) 00:24:13.752 2.193 - 2.204: 86.2594% ( 350) 00:24:13.752 2.204 - 2.216: 88.1301% ( 252) 00:24:13.752 2.216 - 2.228: 89.2361% ( 149) 00:24:13.752 2.228 - 2.240: 92.1387% ( 391) 00:24:13.752 2.240 - 2.252: 93.5640% ( 192) 00:24:13.752 2.252 - 2.264: 93.9574% ( 53) 00:24:13.752 2.264 - 2.276: 94.3583% ( 54) 00:24:13.752 2.276 - 2.287: 94.6997% ( 46) 00:24:13.752 2.287 - 2.299: 94.9224% ( 30) 00:24:13.752 2.299 - 2.311: 95.2342% ( 42) 00:24:13.752 2.311 - 2.323: 95.5311% ( 40) 00:24:13.752 2.323 - 2.335: 95.6722% ( 19) 00:24:13.752 2.335 - 2.347: 95.8429% ( 23) 00:24:13.752 2.347 - 2.359: 96.1324% ( 39) 00:24:13.752 2.359 - 2.370: 96.5110% ( 51) 00:24:13.752 2.370 - 2.382: 96.7931% ( 38) 00:24:13.752 2.382 - 2.394: 97.1123% ( 43) 00:24:13.752 2.394 - 2.406: 97.4315% ( 43) 00:24:13.752 2.406 - 2.418: 97.6245% ( 26) 00:24:13.752 2.418 - 2.430: 97.8843% ( 35) 00:24:13.752 2.430 - 2.441: 98.0328% ( 20) 00:24:13.752 2.441 - 2.453: 98.1442% ( 15) 00:24:13.752 2.453 - 2.465: 98.2035% ( 8) 00:24:13.752 2.465 - 2.477: 98.2481% ( 6) 00:24:13.752 2.477 - 2.489: 98.3149% ( 9) 00:24:13.752 2.489 - 2.501: 98.3669% ( 7) 00:24:13.752 2.501 - 2.513: 98.3817% ( 2) 00:24:13.752 2.513 - 2.524: 98.3891% ( 1) 00:24:13.752 2.524 - 2.536: 98.4114% ( 3) 00:24:13.752 2.536 - 2.548: 98.4262% ( 2) 00:24:13.752 2.548 - 2.560: 98.4337% ( 1) 00:24:13.752 2.560 - 2.572: 98.4411% ( 1) 00:24:13.752 2.572 - 2.584: 98.4485% ( 1) 00:24:13.752 2.596 - 2.607: 98.4782% ( 4) 00:24:13.752 2.619 - 2.631: 98.4856% ( 1) 00:24:13.752 2.631 - 2.643: 98.4931% ( 1) 00:24:13.752 2.667 - 2.679: 98.5079% ( 2) 00:24:13.752 2.679 - 2.690: 98.5153% ( 1) 00:24:13.752 2.690 - 2.702: 98.5228% ( 1) 00:24:13.752 2.773 - 2.785: 98.5302% ( 1) 00:24:13.752 2.844 - 2.856: 98.5376% ( 1) 00:24:13.752 2.939 - 2.951: 98.5450% ( 1) 00:24:13.752 3.129 - 3.153: 98.5524% ( 1) 00:24:13.752 3.319 - 3.342: 98.5599% ( 1) 00:24:13.752 3.342 - 3.366: 98.5673% ( 1) 00:24:13.752 3.366 - 3.390: 98.5747% ( 1) 00:24:13.752 3.390 - 3.413: 98.5896% ( 2) 00:24:13.752 3.484 - 3.508: 98.6044% ( 2) 
00:24:13.752 3.532 - 3.556: 98.6118% ( 1) 00:24:13.752 3.556 - 3.579: 98.6193% ( 1) 00:24:13.752 3.579 - 3.603: 98.6267% ( 1) 00:24:13.752 3.603 - 3.627: 98.6415% ( 2) 00:24:13.752 3.627 - 3.650: 98.6489% ( 1) 00:24:13.752 3.698 - 3.721: 98.6564% ( 1) 00:24:13.752 3.721 - 3.745: 98.6712% ( 2) 00:24:13.752 3.793 - 3.816: 98.6786% ( 1) 00:24:13.752 3.816 - 3.840: 98.6861% ( 1) 00:24:13.752 3.840 - 3.864: 98.6935% ( 1) 00:24:13.752 3.864 - 3.887: 98.7009% ( 1) 00:24:13.752 3.887 - 3.911: 98.7083% ( 1) 00:24:13.752 3.982 - 4.006: 98.7158% ( 1) 00:24:13.752 4.148 - 4.172: 98.7232% ( 1) 00:24:13.752 4.622 - 4.646: 98.7306% ( 1) 00:24:13.752 4.741 - 4.764: 98.7380% ( 1) 00:24:13.752 4.907 - 4.930: 98.7455% ( 1) 00:24:13.752 4.978 - 5.001: 98.7529% ( 1) 00:24:13.752 5.096 - 5.120: 98.7603% ( 1) 00:24:13.752 5.215 - 5.239: 98.7677% ( 1) 00:24:13.752 5.286 - 5.310: 98.7826% ( 2) 00:24:13.752 5.499 - 5.523: 98.7900% ( 1) 00:24:13.752 5.713 - 5.736: 98.8048% ( 2) 00:24:13.752 5.807 - 5.831: 98.8123% ( 1) 00:24:13.752 5.879 - 5.902: 98.8197% ( 1) 00:24:13.752 5.902 - 5.926: 98.8271% ( 1) 00:24:13.752 5.973 - 5.997: 98.8345% ( 1) 00:24:13.752 6.068 - 6.116: 98.8494% ( 2) 00:24:13.752 6.163 - 6.210: 98.8568% ( 1) 00:24:13.752 6.210 - 6.258: 98.8642% ( 1) 00:24:13.752 6.447 - 6.495: 98.8791% ( 2) 00:24:13.752 6.495 - 6.542: 98.8865% ( 1) 00:24:13.752 6.637 - 6.684: 98.8939% ( 1) [2024-05-15 08:50:08.268002] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:24:13.752 6.732 - 6.779: 98.9013% ( 1) 00:24:13.752 7.159 - 7.206: 98.9088% ( 1) 00:24:13.752 7.680 - 7.727: 98.9162% ( 1) 00:24:13.753 7.822 - 7.870: 98.9236% ( 1) 00:24:13.753 11.567 - 11.615: 98.9310% ( 1) 00:24:13.753 15.834 - 15.929: 98.9607% ( 4) 00:24:13.753 15.929 - 16.024: 98.9830% ( 3) 00:24:13.753 16.024 - 16.119: 98.9904% ( 1) 00:24:13.753 16.119 - 16.213: 99.0350% ( 6) 00:24:13.753 16.213 - 16.308: 99.0869% ( 7) 00:24:13.753 16.308 - 16.403: 99.1092% ( 3) 00:24:13.753 16.498 - 16.593: 99.1463% ( 5) 00:24:13.753 16.593 - 16.687: 99.1909% ( 6) 00:24:13.753 16.687 - 16.782: 99.2205% ( 4) 00:24:13.753 16.782 - 16.877: 99.2577% ( 5) 00:24:13.753 16.877 - 16.972: 99.2725% ( 2) 00:24:13.753 16.972 - 17.067: 99.2948% ( 3) 00:24:13.753 17.067 - 17.161: 99.3096% ( 2) 00:24:13.753 17.256 - 17.351: 99.3319% ( 3) 00:24:13.753 17.351 - 17.446: 99.3616% ( 4) 00:24:13.753 17.446 - 17.541: 99.3690% ( 1) 00:24:13.753 17.636 - 17.730: 99.3839% ( 2) 00:24:13.753 17.730 - 17.825: 99.3987% ( 2) 00:24:13.753 18.015 - 18.110: 99.4136% ( 2) 00:24:13.753 18.110 - 18.204: 99.4210% ( 1) 00:24:13.753 18.204 - 18.299: 99.4284% ( 1) 00:24:13.753 25.979 - 26.169: 99.4358% ( 1) 00:24:13.753 3980.705 - 4004.978: 99.8293% ( 53) 00:24:13.753 4004.978 - 4029.250: 99.9926% ( 22) 00:24:13.753 4077.796 - 4102.068: 100.0000% ( 1) 00:24:13.753 00:24:13.753 08:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:24:13.753 08:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:24:13.753 08:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:24:13.753 08:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:24:13.753 08:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_get_subsystems 00:24:14.009 [ 00:24:14.009 { 00:24:14.009 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:14.009 "subtype": "Discovery", 00:24:14.009 "listen_addresses": [], 00:24:14.009 "allow_any_host": true, 00:24:14.009 "hosts": [] 00:24:14.009 }, 00:24:14.009 { 00:24:14.009 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:24:14.009 "subtype": "NVMe", 00:24:14.009 "listen_addresses": [ 00:24:14.009 { 00:24:14.009 "trtype": "VFIOUSER", 00:24:14.009 "adrfam": "IPv4", 00:24:14.009 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:24:14.009 "trsvcid": "0" 00:24:14.009 } 00:24:14.009 ], 00:24:14.009 "allow_any_host": true, 00:24:14.009 "hosts": [], 00:24:14.009 "serial_number": "SPDK1", 00:24:14.009 "model_number": "SPDK bdev Controller", 00:24:14.009 "max_namespaces": 32, 00:24:14.009 "min_cntlid": 1, 00:24:14.009 "max_cntlid": 65519, 00:24:14.009 "namespaces": [ 00:24:14.009 { 00:24:14.009 "nsid": 1, 00:24:14.009 "bdev_name": "Malloc1", 00:24:14.009 "name": "Malloc1", 00:24:14.009 "nguid": "5E5D3E8E0CEA40A88FD9F9F010C20A78", 00:24:14.009 "uuid": "5e5d3e8e-0cea-40a8-8fd9-f9f010c20a78" 00:24:14.009 }, 00:24:14.009 { 00:24:14.009 "nsid": 2, 00:24:14.009 "bdev_name": "Malloc3", 00:24:14.009 "name": "Malloc3", 00:24:14.009 "nguid": "9F9F6B5FD067461195859AC97BE636CE", 00:24:14.009 "uuid": "9f9f6b5f-d067-4611-9585-9ac97be636ce" 00:24:14.009 } 00:24:14.009 ] 00:24:14.009 }, 00:24:14.009 { 00:24:14.009 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:24:14.009 "subtype": "NVMe", 00:24:14.009 "listen_addresses": [ 00:24:14.009 { 00:24:14.009 "trtype": "VFIOUSER", 00:24:14.009 "adrfam": "IPv4", 00:24:14.009 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:24:14.009 "trsvcid": "0" 00:24:14.009 } 00:24:14.009 ], 00:24:14.009 "allow_any_host": true, 00:24:14.009 "hosts": [], 00:24:14.009 "serial_number": "SPDK2", 00:24:14.009 "model_number": "SPDK bdev Controller", 00:24:14.009 "max_namespaces": 32, 00:24:14.009 "min_cntlid": 1, 00:24:14.009 "max_cntlid": 65519, 00:24:14.009 "namespaces": [ 00:24:14.009 { 00:24:14.009 "nsid": 1, 00:24:14.009 "bdev_name": "Malloc2", 00:24:14.009 "name": "Malloc2", 00:24:14.009 "nguid": "5B48CBBBFFC6479CB7FC5809BD986749", 00:24:14.009 "uuid": "5b48cbbb-ffc6-479c-b7fc-5809bd986749" 00:24:14.009 } 00:24:14.009 ] 00:24:14.009 } 00:24:14.009 ] 00:24:14.009 08:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:14.009 08:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2241135 00:24:14.009 08:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:24:14.009 08:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:24:14.009 08:50:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # local i=0 00:24:14.009 08:50:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:14.009 08:50:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:14.009 08:50:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # return 0 00:24:14.009 08:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:24:14.009 08:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:24:14.009 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.009 [2024-05-15 08:50:08.781696] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:24:14.266 Malloc4 00:24:14.266 08:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:24:14.524 [2024-05-15 08:50:09.119167] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:24:14.524 08:50:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:24:14.524 Asynchronous Event Request test 00:24:14.524 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:24:14.524 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:24:14.524 Registering asynchronous event callbacks... 00:24:14.524 Starting namespace attribute notice tests for all controllers... 00:24:14.524 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:14.524 aer_cb - Changed Namespace 00:24:14.524 Cleaning up... 00:24:14.782 [ 00:24:14.782 { 00:24:14.782 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:14.782 "subtype": "Discovery", 00:24:14.782 "listen_addresses": [], 00:24:14.782 "allow_any_host": true, 00:24:14.782 "hosts": [] 00:24:14.782 }, 00:24:14.782 { 00:24:14.782 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:24:14.782 "subtype": "NVMe", 00:24:14.782 "listen_addresses": [ 00:24:14.782 { 00:24:14.782 "trtype": "VFIOUSER", 00:24:14.782 "adrfam": "IPv4", 00:24:14.782 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:24:14.782 "trsvcid": "0" 00:24:14.782 } 00:24:14.782 ], 00:24:14.782 "allow_any_host": true, 00:24:14.782 "hosts": [], 00:24:14.782 "serial_number": "SPDK1", 00:24:14.782 "model_number": "SPDK bdev Controller", 00:24:14.782 "max_namespaces": 32, 00:24:14.782 "min_cntlid": 1, 00:24:14.782 "max_cntlid": 65519, 00:24:14.782 "namespaces": [ 00:24:14.782 { 00:24:14.782 "nsid": 1, 00:24:14.782 "bdev_name": "Malloc1", 00:24:14.782 "name": "Malloc1", 00:24:14.782 "nguid": "5E5D3E8E0CEA40A88FD9F9F010C20A78", 00:24:14.782 "uuid": "5e5d3e8e-0cea-40a8-8fd9-f9f010c20a78" 00:24:14.782 }, 00:24:14.782 { 00:24:14.782 "nsid": 2, 00:24:14.782 "bdev_name": "Malloc3", 00:24:14.782 "name": "Malloc3", 00:24:14.782 "nguid": "9F9F6B5FD067461195859AC97BE636CE", 00:24:14.782 "uuid": "9f9f6b5f-d067-4611-9585-9ac97be636ce" 00:24:14.782 } 00:24:14.782 ] 00:24:14.782 }, 00:24:14.782 { 00:24:14.782 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:24:14.782 "subtype": "NVMe", 00:24:14.782 "listen_addresses": [ 00:24:14.782 { 00:24:14.782 "trtype": "VFIOUSER", 00:24:14.782 "adrfam": "IPv4", 00:24:14.782 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:24:14.782 "trsvcid": "0" 00:24:14.782 } 00:24:14.782 ], 00:24:14.782 "allow_any_host": true, 00:24:14.782 "hosts": [], 00:24:14.782 "serial_number": "SPDK2", 00:24:14.782 "model_number": "SPDK bdev Controller", 00:24:14.782 
"max_namespaces": 32, 00:24:14.782 "min_cntlid": 1, 00:24:14.782 "max_cntlid": 65519, 00:24:14.782 "namespaces": [ 00:24:14.782 { 00:24:14.782 "nsid": 1, 00:24:14.782 "bdev_name": "Malloc2", 00:24:14.782 "name": "Malloc2", 00:24:14.782 "nguid": "5B48CBBBFFC6479CB7FC5809BD986749", 00:24:14.782 "uuid": "5b48cbbb-ffc6-479c-b7fc-5809bd986749" 00:24:14.782 }, 00:24:14.782 { 00:24:14.782 "nsid": 2, 00:24:14.782 "bdev_name": "Malloc4", 00:24:14.782 "name": "Malloc4", 00:24:14.782 "nguid": "9EA0CB8EA47B4AD09684FFAA3A831317", 00:24:14.782 "uuid": "9ea0cb8e-a47b-4ad0-9684-ffaa3a831317" 00:24:14.782 } 00:24:14.782 ] 00:24:14.782 } 00:24:14.782 ] 00:24:14.782 08:50:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2241135 00:24:14.782 08:50:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:24:14.782 08:50:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2235546 00:24:14.782 08:50:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # '[' -z 2235546 ']' 00:24:14.782 08:50:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # kill -0 2235546 00:24:14.782 08:50:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # uname 00:24:14.782 08:50:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:14.782 08:50:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2235546 00:24:14.782 08:50:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:14.782 08:50:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:14.782 08:50:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2235546' 00:24:14.782 killing process with pid 2235546 00:24:14.782 08:50:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # kill 2235546 00:24:14.782 [2024-05-15 08:50:09.453893] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:14.782 08:50:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@971 -- # wait 2235546 00:24:15.041 08:50:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:24:15.041 08:50:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:24:15.041 08:50:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:24:15.041 08:50:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:24:15.041 08:50:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:24:15.041 08:50:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2241279 00:24:15.041 08:50:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:24:15.041 08:50:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2241279' 00:24:15.041 Process pid: 2241279 00:24:15.041 08:50:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:24:15.041 08:50:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2241279 00:24:15.041 08:50:09 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@828 -- # '[' -z 2241279 ']' 00:24:15.041 08:50:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.041 08:50:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:15.041 08:50:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.041 08:50:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:15.041 08:50:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:24:15.300 [2024-05-15 08:50:09.845160] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:15.300 [2024-05-15 08:50:09.846223] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:24:15.300 [2024-05-15 08:50:09.846299] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.300 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.300 [2024-05-15 08:50:09.919005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:15.300 [2024-05-15 08:50:10.008683] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.300 [2024-05-15 08:50:10.008741] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.300 [2024-05-15 08:50:10.008758] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.300 [2024-05-15 08:50:10.008771] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.300 [2024-05-15 08:50:10.008783] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:15.300 [2024-05-15 08:50:10.008878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.300 [2024-05-15 08:50:10.008931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:15.300 [2024-05-15 08:50:10.008988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:15.300 [2024-05-15 08:50:10.008990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.558 [2024-05-15 08:50:10.117779] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:15.558 [2024-05-15 08:50:10.118040] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:24:15.558 [2024-05-15 08:50:10.118330] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:24:15.558 [2024-05-15 08:50:10.118948] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:15.558 [2024-05-15 08:50:10.119186] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
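With the target restarted in interrupt mode, the test then provisions the two vfio-user controllers again. A condensed sketch of the per-device RPC sequence traced below (rpc.py path as in this workspace; the loop and variable $i are illustrative only, standing in for the script's seq 1 $NUM_DEVICES iteration):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER -M -I           # transport created with the extra '-M -I' args this test passes
for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i  # socket directory for controller $i
    $rpc bdev_malloc_create 64 512 -b Malloc$i         # 64 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done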
00:24:15.558 08:50:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:15.558 08:50:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@861 -- # return 0 00:24:15.558 08:50:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:24:16.490 08:50:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:24:16.748 08:50:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:24:16.748 08:50:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:24:16.748 08:50:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:24:16.748 08:50:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:24:16.748 08:50:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:17.007 Malloc1 00:24:17.008 08:50:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:24:17.266 08:50:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:24:17.524 08:50:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:24:17.781 [2024-05-15 08:50:12.525595] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:17.781 08:50:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:24:17.781 08:50:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:24:17.781 08:50:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:24:18.346 Malloc2 00:24:18.346 08:50:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:24:18.346 08:50:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:24:18.602 08:50:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:24:19.167 08:50:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:24:19.167 08:50:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2241279 00:24:19.167 08:50:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # '[' -z 2241279 ']' 00:24:19.167 08:50:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # kill -0 2241279 
00:24:19.167 08:50:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # uname 00:24:19.167 08:50:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:19.167 08:50:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2241279 00:24:19.167 08:50:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:19.167 08:50:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:19.167 08:50:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2241279' 00:24:19.167 killing process with pid 2241279 00:24:19.167 08:50:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # kill 2241279 00:24:19.167 [2024-05-15 08:50:13.680957] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:19.167 08:50:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@971 -- # wait 2241279 00:24:19.167 08:50:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:24:19.167 08:50:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:24:19.167 00:24:19.167 real 0m52.886s 00:24:19.167 user 3m28.984s 00:24:19.167 sys 0m4.604s 00:24:19.167 08:50:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:19.167 08:50:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:24:19.167 ************************************ 00:24:19.167 END TEST nvmf_vfio_user 00:24:19.167 ************************************ 00:24:19.426 08:50:13 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:24:19.426 08:50:13 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:19.426 08:50:13 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:19.426 08:50:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:19.426 ************************************ 00:24:19.426 START TEST nvmf_vfio_user_nvme_compliance 00:24:19.426 ************************************ 00:24:19.426 08:50:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:24:19.426 * Looking for test storage... 
00:24:19.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=2241876 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2241876' 00:24:19.426 Process pid: 2241876 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2241876 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@828 -- # '[' -z 2241876 ']' 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:19.426 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:24:19.426 [2024-05-15 08:50:14.108932] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:24:19.426 [2024-05-15 08:50:14.109013] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.426 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.426 [2024-05-15 08:50:14.176159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:19.684 [2024-05-15 08:50:14.256937] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.684 [2024-05-15 08:50:14.256988] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.684 [2024-05-15 08:50:14.257023] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.684 [2024-05-15 08:50:14.257036] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.684 [2024-05-15 08:50:14.257047] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
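
The waitforlisten step above blocks until the freshly launched target answers on its RPC socket. Conceptually it is the loop below; this is only a sketch, as the real helper in autotest_common.sh also watches the pid and bounds the number of retries. The binary path is abbreviated from the trace.

    nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &                 # shm id 0, all tracepoint groups, cores 0-2
    nvmfpid=$!
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    # poll until the app is up and serving RPCs on the default socket
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

The -m 0x7 core mask matches the "Total cores available: 3" notice and the three reactors started on cores 0-2 in the log that follows.
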
00:24:19.684 [2024-05-15 08:50:14.257130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.684 [2024-05-15 08:50:14.257197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.684 [2024-05-15 08:50:14.257200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.684 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:19.684 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@861 -- # return 0 00:24:19.684 08:50:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:24:20.616 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:24:20.616 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:24:20.616 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:24:20.616 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.616 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:24:20.616 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.616 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:24:20.616 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:24:20.616 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.616 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:24:20.874 malloc0 00:24:20.874 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.874 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:24:20.874 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.874 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:24:20.874 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.874 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:24:20.874 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.874 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:24:20.874 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.874 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:24:20.874 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.874 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:24:20.874 [2024-05-15 08:50:15.444926] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:20.874 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.874 08:50:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:24:20.874 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.874 00:24:20.874 00:24:20.874 CUnit - A unit testing framework for C - Version 2.1-3 00:24:20.874 http://cunit.sourceforge.net/ 00:24:20.874 00:24:20.874 00:24:20.874 Suite: nvme_compliance 00:24:20.874 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-15 08:50:15.619818] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:20.874 [2024-05-15 08:50:15.621297] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:24:20.875 [2024-05-15 08:50:15.621323] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:24:20.875 [2024-05-15 08:50:15.621351] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:24:20.875 [2024-05-15 08:50:15.624847] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:20.875 passed 00:24:21.134 Test: admin_identify_ctrlr_verify_fused ...[2024-05-15 08:50:15.708474] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:21.134 [2024-05-15 08:50:15.713519] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:21.134 passed 00:24:21.134 Test: admin_identify_ns ...[2024-05-15 08:50:15.798733] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:21.134 [2024-05-15 08:50:15.858235] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:24:21.134 [2024-05-15 08:50:15.866244] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:24:21.134 [2024-05-15 08:50:15.887357] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:21.134 passed 00:24:21.435 Test: admin_get_features_mandatory_features ...[2024-05-15 08:50:15.972589] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:21.435 [2024-05-15 08:50:15.975613] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:21.435 passed 00:24:21.435 Test: admin_get_features_optional_features ...[2024-05-15 08:50:16.059129] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:21.435 [2024-05-15 08:50:16.062147] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:21.435 passed 00:24:21.435 Test: admin_set_features_number_of_queues ...[2024-05-15 08:50:16.144673] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:21.694 [2024-05-15 08:50:16.253341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:21.694 passed 00:24:21.694 Test: admin_get_log_page_mandatory_logs ...[2024-05-15 08:50:16.335921] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:21.694 [2024-05-15 08:50:16.338944] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:21.694 passed 
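
Each test case in this suite follows the same bracket visible in the log: the controller on the vfio-user socket is enabled, a deliberately malformed admin command is driven at it, the expected *ERROR* path is checked, and the controller is disabled again. The whole suite is a single binary; its invocation, with the transport ID assembled from the script variables set at compliance.sh@28-29, is:

    nqn=nqn.2021-09.io.spdk:cnode0
    traddr=/var/run/vfio-user
    # -r carries the transport ID string; -g is reproduced verbatim from the trace
    nvme_compliance -g -r "trtype:VFIOUSER traddr:$traddr subnqn:$nqn"
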
00:24:21.694 Test: admin_get_log_page_with_lpo ...[2024-05-15 08:50:16.418755] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:21.951 [2024-05-15 08:50:16.490245] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:24:21.951 [2024-05-15 08:50:16.503300] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:21.951 passed 00:24:21.951 Test: fabric_property_get ...[2024-05-15 08:50:16.585884] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:21.951 [2024-05-15 08:50:16.587155] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:24:21.951 [2024-05-15 08:50:16.588907] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:21.951 passed 00:24:21.951 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-15 08:50:16.670421] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:21.951 [2024-05-15 08:50:16.671688] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:24:21.951 [2024-05-15 08:50:16.674445] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:21.951 passed 00:24:22.209 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-15 08:50:16.759124] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:22.209 [2024-05-15 08:50:16.840229] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:24:22.209 [2024-05-15 08:50:16.856229] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:24:22.209 [2024-05-15 08:50:16.861332] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:22.209 passed 00:24:22.209 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-15 08:50:16.945957] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:22.209 [2024-05-15 08:50:16.947224] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:24:22.209 [2024-05-15 08:50:16.950980] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:22.209 passed 00:24:22.466 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-15 08:50:17.033723] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:22.466 [2024-05-15 08:50:17.109241] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:24:22.466 [2024-05-15 08:50:17.133224] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:24:22.466 [2024-05-15 08:50:17.138344] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:22.466 passed 00:24:22.466 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-15 08:50:17.221863] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:22.466 [2024-05-15 08:50:17.223123] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:24:22.466 [2024-05-15 08:50:17.223177] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:24:22.466 [2024-05-15 08:50:17.224893] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:22.466 passed 00:24:22.723 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-15 
08:50:17.305813] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:22.723 [2024-05-15 08:50:17.398232] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:24:22.723 [2024-05-15 08:50:17.406227] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:24:22.723 [2024-05-15 08:50:17.414226] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:24:22.723 [2024-05-15 08:50:17.422256] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:24:22.723 [2024-05-15 08:50:17.451354] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:22.723 passed 00:24:22.979 Test: admin_create_io_sq_verify_pc ...[2024-05-15 08:50:17.532879] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:22.979 [2024-05-15 08:50:17.552238] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:24:22.979 [2024-05-15 08:50:17.570210] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:22.979 passed 00:24:22.979 Test: admin_create_io_qp_max_qps ...[2024-05-15 08:50:17.650776] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:24.355 [2024-05-15 08:50:18.749230] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:24:24.355 [2024-05-15 08:50:19.126041] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:24.615 passed 00:24:24.615 Test: admin_create_io_sq_shared_cq ...[2024-05-15 08:50:19.209282] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:24.615 [2024-05-15 08:50:19.343222] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:24:24.615 [2024-05-15 08:50:19.380313] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:24.873 passed 00:24:24.873 00:24:24.873 Run Summary: Type Total Ran Passed Failed Inactive 00:24:24.873 suites 1 1 n/a 0 0 00:24:24.873 tests 18 18 18 0 0 00:24:24.873 asserts 360 360 360 0 n/a 00:24:24.873 00:24:24.873 Elapsed time = 1.556 seconds 00:24:24.873 08:50:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2241876 00:24:24.873 08:50:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@947 -- # '[' -z 2241876 ']' 00:24:24.873 08:50:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # kill -0 2241876 00:24:24.873 08:50:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # uname 00:24:24.873 08:50:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:24.873 08:50:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2241876 00:24:24.873 08:50:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:24.873 08:50:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:24.873 08:50:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2241876' 00:24:24.873 killing process with pid 2241876 00:24:24.873 08:50:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@966 -- # kill 2241876 00:24:24.873 [2024-05-15 08:50:19.461326] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:24.873 08:50:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # wait 2241876 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:24:25.130 00:24:25.130 real 0m5.701s 00:24:25.130 user 0m16.033s 00:24:25.130 sys 0m0.583s 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:24:25.130 ************************************ 00:24:25.130 END TEST nvmf_vfio_user_nvme_compliance 00:24:25.130 ************************************ 00:24:25.130 08:50:19 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:24:25.130 08:50:19 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:25.130 08:50:19 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:25.130 08:50:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:25.130 ************************************ 00:24:25.130 START TEST nvmf_vfio_user_fuzz 00:24:25.130 ************************************ 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:24:25.130 * Looking for test storage... 
00:24:25.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.130 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:24:25.131 08:50:19 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2242601 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2242601' 00:24:25.131 Process pid: 2242601 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2242601 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@828 -- # '[' -z 2242601 ']' 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:25.131 08:50:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:25.388 08:50:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:25.388 08:50:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@861 -- # return 0 00:24:25.388 08:50:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.760 malloc0 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:24:26.760 08:50:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:24:58.822 Fuzzing completed. Shutting down the fuzz application 00:24:58.822 00:24:58.822 Dumping successful admin opcodes: 00:24:58.822 8, 9, 10, 24, 00:24:58.822 Dumping successful io opcodes: 00:24:58.822 0, 00:24:58.822 NS: 0x200003a1ef00 I/O qp, Total commands completed: 642617, total successful commands: 2493, random_seed: 1432209024 00:24:58.822 NS: 0x200003a1ef00 admin qp, Total commands completed: 146607, total successful commands: 1190, random_seed: 2624903360 00:24:58.822 08:50:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:24:58.822 08:50:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.822 08:50:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:58.822 08:50:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.822 08:50:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2242601 00:24:58.822 08:50:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@947 -- # '[' -z 2242601 ']' 00:24:58.822 08:50:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # kill -0 2242601 00:24:58.822 08:50:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # uname 00:24:58.822 08:50:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:58.822 08:50:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2242601 00:24:58.822 08:50:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:58.822 08:50:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:58.822 08:50:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2242601' 00:24:58.822 killing process with pid 2242601 00:24:58.822 08:50:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # kill 2242601 00:24:58.822 08:50:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # wait 2242601 00:24:58.822 08:50:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 
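
The fuzz pass summarized above ran the nvme_fuzz app against the vfio-user endpoint for 30 seconds. A sketch of the invocation, as logged: -m 0x2 is the standard SPDK core mask, -t and -S appear to bound the run time and seed the generator, and -F, -N and -a are reproduced verbatim from the trace.

    nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
        -N -a

Per the summary, the run completed roughly 643k I/O and 147k admin commands, dumping the opcodes that succeeded before the subsystem was deleted and the target killed.
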
00:24:58.822 08:50:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:24:58.822 00:24:58.822 real 0m32.242s 00:24:58.822 user 0m33.436s 00:24:58.822 sys 0m25.467s 00:24:58.822 08:50:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:58.822 08:50:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:58.822 ************************************ 00:24:58.822 END TEST nvmf_vfio_user_fuzz 00:24:58.822 ************************************ 00:24:58.822 08:50:52 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:24:58.822 08:50:52 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:58.822 08:50:52 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:58.822 08:50:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:58.822 ************************************ 00:24:58.822 START TEST nvmf_host_management 00:24:58.822 ************************************ 00:24:58.822 08:50:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:24:58.822 * Looking for test storage... 00:24:58.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:24:58.823 08:50:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.757 08:50:54 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:59.757 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:59.758 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:59.758 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
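
The device scan above is keyed off PCI vendor/device IDs: Intel E810 parts (0x1592, 0x159b), X722 (0x37d2), and a set of Mellanox IDs under vendor 0x15b3. The harness walks a prebuilt pci_bus_cache, but a condensed sketch of the same lookup, assuming lspci is available (hypothetical, for illustration only):

    # find Intel E810 functions (vendor 0x8086, device 0x159b) and their net devices
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        # map each PCI function to its kernel net device via sysfs
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            echo "Found net device under $pci: $(basename "$dev")"
        done
    done
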
00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:59.758 Found net devices under 0000:09:00.0: cvl_0_0 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:59.758 Found net devices under 0000:09:00.1: cvl_0_1 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:59.758 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.017 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.017 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.017 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:00.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:25:00.017 00:25:00.017 --- 10.0.0.2 ping statistics --- 00:25:00.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.017 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:25:00.017 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:00.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:25:00.017 00:25:00.017 --- 10.0.0.1 ping statistics --- 00:25:00.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.017 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:25:00.017 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2248325 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2248325 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 2248325 ']' 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:00.018 08:50:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:00.018 [2024-05-15 08:50:54.633941] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
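The bring-up sequence traced above is the heart of the TCP fixture: one E810 port stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1) while the other is moved into the private namespace cvl_0_0_ns_spdk as the target (cvl_0_0, 10.0.0.2), so NVMe/TCP traffic crosses the physical link, and a ping in each direction verifies the path before anything listens on it. Condensed from the xtrace:

    # Condensed bring-up as executed by nvmf_tcp_init above.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves out
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

From here on, every target-side process (nvmf_tgt included) is prefixed with ip netns exec cvl_0_0_ns_spdk via NVMF_TARGET_NS_CMD and NVMF_APP.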
00:25:00.018 [2024-05-15 08:50:54.634035] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.018 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.018 [2024-05-15 08:50:54.709014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:00.018 [2024-05-15 08:50:54.796170] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.018 [2024-05-15 08:50:54.796234] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.018 [2024-05-15 08:50:54.796264] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.018 [2024-05-15 08:50:54.796276] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.018 [2024-05-15 08:50:54.796286] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.018 [2024-05-15 08:50:54.796370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:00.018 [2024-05-15 08:50:54.796434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:00.018 [2024-05-15 08:50:54.796479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:00.018 [2024-05-15 08:50:54.796481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.277 08:50:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:00.277 08:50:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:25:00.277 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:00.277 08:50:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:00.277 08:50:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:00.278 08:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.278 08:50:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:00.278 08:50:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.278 08:50:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:00.278 [2024-05-15 08:50:54.949938] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.278 08:50:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.278 08:50:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:25:00.278 08:50:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:00.278 08:50:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:00.278 08:50:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:00.278 08:50:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:25:00.278 08:50:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:25:00.278 08:50:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.278 08:50:54 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:00.278 Malloc0 00:25:00.278 [2024-05-15 08:50:55.008410] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:00.278 [2024-05-15 08:50:55.008724] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2248376 00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2248376 /var/tmp/bdevperf.sock 00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 2248376 ']' 00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:00.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:00.278 { 00:25:00.278 "params": { 00:25:00.278 "name": "Nvme$subsystem", 00:25:00.278 "trtype": "$TEST_TRANSPORT", 00:25:00.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.278 "adrfam": "ipv4", 00:25:00.278 "trsvcid": "$NVMF_PORT", 00:25:00.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.278 "hdgst": ${hdgst:-false}, 00:25:00.278 "ddgst": ${ddgst:-false} 00:25:00.278 }, 00:25:00.278 "method": "bdev_nvme_attach_controller" 00:25:00.278 } 00:25:00.278 EOF 00:25:00.278 )") 00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
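bdevperf receives its controller through gen_nvmf_target_json, whose heredoc template is traced above: one bdev_nvme_attach_controller object is expanded per subsystem id into the config array, and the pieces are joined with commas and printed (the resolved object for Nvme0 appears just below). A simplified, self-contained sketch of the same pattern, with hdgst/ddgst shown already resolved to false and any outer wrapping done by nvmf/common.sh omitted; note that <<- only strips leading tabs, so the heredoc body and its EOF terminator must be tab-indented:

	# Stand-in sketch of the template expansion, not the real helper.
	config=()
	for subsystem in 0; do
		config+=("$(cat <<-EOF
		{
		  "params": {
		    "name": "Nvme$subsystem",
		    "trtype": "tcp",
		    "traddr": "10.0.0.2",
		    "adrfam": "ipv4",
		    "trsvcid": "4420",
		    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
		    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
		    "hdgst": false,
		    "ddgst": false
		  },
		  "method": "bdev_nvme_attach_controller"
		}
		EOF
		)")
	done
	(IFS=,; printf '%s\n' "${config[*]}")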
00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:25:00.278 08:50:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:00.278 "params": { 00:25:00.278 "name": "Nvme0", 00:25:00.278 "trtype": "tcp", 00:25:00.278 "traddr": "10.0.0.2", 00:25:00.278 "adrfam": "ipv4", 00:25:00.278 "trsvcid": "4420", 00:25:00.278 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:00.278 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:00.278 "hdgst": false, 00:25:00.278 "ddgst": false 00:25:00.278 }, 00:25:00.278 "method": "bdev_nvme_attach_controller" 00:25:00.278 }' 00:25:00.537 [2024-05-15 08:50:55.077558] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:25:00.537 [2024-05-15 08:50:55.077646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2248376 ] 00:25:00.537 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.537 [2024-05-15 08:50:55.149359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.537 [2024-05-15 08:50:55.235706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.796 Running I/O for 10 seconds... 00:25:00.796 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:00.796 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:25:00.796 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:00.796 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.796 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:00.796 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.796 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:00.796 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:25:00.796 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:00.796 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:25:00.796 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:25:00.796 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:25:00.796 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:25:00.796 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:25:00.796 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:25:00.796 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:25:00.796 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.796 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:00.796 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.796 08:50:55 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:25:00.796 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:25:00.796 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:25:01.056 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:25:01.056 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:25:01.056 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:25:01.056 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:25:01.056 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:01.056 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:01.056 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:01.056 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:25:01.056 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:25:01.056 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:25:01.056 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:25:01.056 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:25:01.056 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:25:01.056 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:01.056 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:01.056 [2024-05-15 08:50:55.819352] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819413] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819429] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819442] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819454] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819466] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819491] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819503] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819524] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 
is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819536] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819548] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819581] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819593] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819605] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819630] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819642] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819665] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819678] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819703] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819715] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819727] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819739] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819751] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819775] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819787] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819799] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819811] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819835] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819847] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819872] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819884] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819895] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819919] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 [2024-05-15 08:50:55.819931] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec68b0 is same with the state(5) to be set 00:25:01.056 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:01.056 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:25:01.056 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:01.056 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:01.056 [2024-05-15 08:50:55.826544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.056 [2024-05-15 08:50:55.826607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.056 [2024-05-15 08:50:55.826640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.056 [2024-05-15 08:50:55.826656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.056 [2024-05-15 08:50:55.826673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.056 [2024-05-15 08:50:55.826688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.056 [2024-05-15 08:50:55.826703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.056 [2024-05-15 08:50:55.826717] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.826733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.826747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.826762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.826776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.826792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.826806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.826822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.826835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.826851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.826864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.826880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.826894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.826910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.826924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.826939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.826953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.826969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.826983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.827980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.827994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.828010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.828023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.828039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.828053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.828068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.828082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.828097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.828111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.828126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.828140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.828159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.828173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.828188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.828202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.828223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.828239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.057 [2024-05-15 08:50:55.828255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.057 [2024-05-15 08:50:55.828277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.058 [2024-05-15 08:50:55.828292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.058 [2024-05-15 08:50:55.828306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.058 [2024-05-15 08:50:55.828321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.058 [2024-05-15 08:50:55.828335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.058 [2024-05-15 08:50:55.828350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.058 [2024-05-15 08:50:55.828364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.058 [2024-05-15 08:50:55.828379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.058 [2024-05-15 08:50:55.828393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.058 [2024-05-15 08:50:55.828408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.058 [2024-05-15 08:50:55.828422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.058 [2024-05-15 08:50:55.828437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.058 [2024-05-15 08:50:55.828451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.058 [2024-05-15 08:50:55.828466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.058 [2024-05-15 08:50:55.828480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.058 [2024-05-15 08:50:55.828495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.058 [2024-05-15 08:50:55.828508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.058 [2024-05-15 08:50:55.828525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.058 [2024-05-15 08:50:55.828543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.058 [2024-05-15 08:50:55.828643] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13341f0 was disconnected and freed. reset controller. 00:25:01.058 [2024-05-15 08:50:55.828717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.058 [2024-05-15 08:50:55.828739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.058 [2024-05-15 08:50:55.828755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.058 [2024-05-15 08:50:55.828768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.058 [2024-05-15 08:50:55.828789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.058 [2024-05-15 08:50:55.828813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.058 [2024-05-15 08:50:55.828836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.058 [2024-05-15 08:50:55.828851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.058 [2024-05-15 08:50:55.828865] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339dc0 is same with the state(5) to be set 00:25:01.058 [2024-05-15 08:50:55.829976] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:01.058 08:50:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:01.058 task offset: 68224 on job bdev=Nvme0n1 fails
00:25:01.058
00:25:01.058                                                      Latency(us)
00:25:01.058 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:01.058 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:01.058 Job: Nvme0n1 ended in about 0.42 seconds with error
00:25:01.058 Verification LBA range: start 0x0 length 0x400
00:25:01.058 Nvme0n1                     :       0.42    1280.00      80.00     153.70     0.00   43351.82    2730.67   34952.53
00:25:01.058 ===================================================================================================================
00:25:01.058 Total                       :               1280.00      80.00     153.70     0.00   43351.82    2730.67   34952.53
00:25:01.058 08:50:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:25:01.058 [2024-05-15 08:50:55.832232] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:01.058 [2024-05-15 08:50:55.832275] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339dc0 (9): Bad file descriptor 00:25:01.058 [2024-05-15 08:50:55.843795] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
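To unpack the burst above: host_management.sh@84 removed host0 from cnode0 while bdevperf was mid-run, the target tore down the queue pair, every in-flight READ and WRITE completed as ABORTED - SQ DELETION, bdevperf printed the failed-job table, and the add_host at @85 let the follow-up controller reset succeed. The gate ensuring I/O really was in flight before the fault is the waitforio loop traced at host_management.sh@52-64; reconstructed from that trace (rpc_cmd being the suite's wrapper around scripts/rpc.py):

    # Poll bdevperf's iostat until the bdev has completed at least 100
    # reads, mirroring the i=10 / -ge 100 / sleep 0.25 loop in the xtrace.
    waitforio() {
        local rpc_sock=$1 bdev=$2 ret=1 i count
        for ((i = 10; i != 0; i--)); do
            count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                    jq -r '.bdevs[0].num_read_ops')
            if [ "$count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }
    waitforio /var/tmp/bdevperf.sock Nvme0n1   # as invoked above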
00:25:02.493 08:50:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2248376 00:25:02.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2248376) - No such process 00:25:02.493 08:50:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:25:02.493 08:50:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:25:02.493 08:50:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:02.493 08:50:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:25:02.493 08:50:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:25:02.493 08:50:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:25:02.493 08:50:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:02.493 08:50:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:02.493 { 00:25:02.493 "params": { 00:25:02.493 "name": "Nvme$subsystem", 00:25:02.493 "trtype": "$TEST_TRANSPORT", 00:25:02.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:02.493 "adrfam": "ipv4", 00:25:02.493 "trsvcid": "$NVMF_PORT", 00:25:02.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:02.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:02.493 "hdgst": ${hdgst:-false}, 00:25:02.493 "ddgst": ${ddgst:-false} 00:25:02.493 }, 00:25:02.493 "method": "bdev_nvme_attach_controller" 00:25:02.493 } 00:25:02.493 EOF 00:25:02.493 )") 00:25:02.493 08:50:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:25:02.493 08:50:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:25:02.493 08:50:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:25:02.493 08:50:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:02.493 "params": { 00:25:02.493 "name": "Nvme0", 00:25:02.493 "trtype": "tcp", 00:25:02.493 "traddr": "10.0.0.2", 00:25:02.493 "adrfam": "ipv4", 00:25:02.493 "trsvcid": "4420", 00:25:02.493 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:02.493 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:02.493 "hdgst": false, 00:25:02.493 "ddgst": false 00:25:02.493 }, 00:25:02.493 "method": "bdev_nvme_attach_controller" 00:25:02.493 }' 00:25:02.493 [2024-05-15 08:50:56.874211] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:25:02.493 [2024-05-15 08:50:56.874300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2248648 ] 00:25:02.493 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.493 [2024-05-15 08:50:56.942984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.493 [2024-05-15 08:50:57.033127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.493 Running I/O for 1 seconds... 
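A quick consistency check on the two result tables: with -q 64 -o 65536 -w verify, the MiB/s column is just IOPS scaled by the 64 KiB I/O size:

    MiB/s = IOPS * 65536 / 2^20 = IOPS * 0.0625
          = 1280.00 * 0.0625 = 80.00   (aborted 10 s run above)
          = 1525.39 * 0.0625 = 95.34   (1 s verify run below)

which matches both tables at the printed precision.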
00:25:03.866
00:25:03.866                                                      Latency(us)
00:25:03.866 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:03.866 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:03.866 Verification LBA range: start 0x0 length 0x400
00:25:03.866 Nvme0n1                     :       1.01    1525.39      95.34       0.00     0.00   41300.76    7524.50   36311.80
00:25:03.866 ===================================================================================================================
00:25:03.866 Total                       :               1525.39      95.34       0.00     0.00   41300.76    7524.50   36311.80
00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:03.866 rmmod nvme_tcp 00:25:03.866 rmmod nvme_fabrics 00:25:03.866 rmmod nvme_keyring 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2248325 ']' 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2248325 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@947 -- # '[' -z 2248325 ']' 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # kill -0 2248325 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # uname 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2248325 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2248325' 00:25:03.866 killing process with pid 2248325 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # kill 2248325 00:25:03.866 [2024-05-15 08:50:58.507880] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:03.866 08:50:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@971 -- # wait 2248325 00:25:04.124 [2024-05-15 08:50:58.734754] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:25:04.124 08:50:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:04.124 08:50:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:04.124 08:50:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:04.124 08:50:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:04.124 08:50:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:04.124 08:50:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.124 08:50:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:04.124 08:50:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.026 08:51:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:06.026 08:51:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:25:06.026 00:25:06.026 real 0m8.762s 00:25:06.026 user 0m18.594s 00:25:06.026 sys 0m2.882s 00:25:06.026 08:51:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:06.026 08:51:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:06.026 ************************************ 00:25:06.026 END TEST nvmf_host_management 00:25:06.026 ************************************ 00:25:06.285 08:51:00 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:25:06.285 08:51:00 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:25:06.285 08:51:00 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:06.285 08:51:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:06.285 ************************************ 00:25:06.285 START TEST nvmf_lvol 00:25:06.285 ************************************ 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:25:06.285 * Looking for test storage... 
00:25:06.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.285 08:51:00 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.286 08:51:00 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:25:06.286 08:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:08.816 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:08.816 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:08.816 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:08.817 Found net devices under 0000:09:00.0: cvl_0_0 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:08.817 Found net devices under 0000:09:00.1: cvl_0_1 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:08.817 
08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:08.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:25:08.817 00:25:08.817 --- 10.0.0.2 ping statistics --- 00:25:08.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.817 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:08.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:25:08.817 00:25:08.817 --- 10.0.0.1 ping statistics --- 00:25:08.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.817 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2251249 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2251249 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@828 -- # '[' -z 2251249 ']' 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:08.817 08:51:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:25:08.817 [2024-05-15 08:51:03.579355] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:25:08.817 [2024-05-15 08:51:03.579431] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:09.076 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.076 [2024-05-15 08:51:03.653764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:09.076 [2024-05-15 08:51:03.734393] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:09.076 [2024-05-15 08:51:03.734441] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:09.076 [2024-05-15 08:51:03.734456] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:09.076 [2024-05-15 08:51:03.734469] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:09.076 [2024-05-15 08:51:03.734480] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:09.076 [2024-05-15 08:51:03.734566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.076 [2024-05-15 08:51:03.734622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:09.076 [2024-05-15 08:51:03.734625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.076 08:51:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:09.076 08:51:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@861 -- # return 0 00:25:09.076 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:09.076 08:51:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:09.076 08:51:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:25:09.076 08:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.076 08:51:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:09.643 [2024-05-15 08:51:04.139677] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.643 08:51:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:09.901 08:51:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:25:09.901 08:51:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:10.159 08:51:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:25:10.159 08:51:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:25:10.417 08:51:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:25:10.675 08:51:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6c654154-7cd9-4d58-8ef0-99af1d0c67dd 00:25:10.675 08:51:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6c654154-7cd9-4d58-8ef0-99af1d0c67dd lvol 20 00:25:10.675 08:51:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e3566205-86d3-49cd-b714-d61ba727fd40 00:25:10.675 08:51:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:25:10.938 08:51:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e3566205-86d3-49cd-b714-d61ba727fd40 00:25:11.197 08:51:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:25:11.454 [2024-05-15 08:51:06.179543] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:11.454 [2024-05-15 08:51:06.179838] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:11.454 08:51:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:11.712 08:51:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2251555 00:25:11.712 08:51:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:25:11.712 08:51:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:25:11.712 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.088 08:51:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e3566205-86d3-49cd-b714-d61ba727fd40 MY_SNAPSHOT 00:25:13.088 08:51:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8e7df20c-77dc-4c11-b8e6-72588f3772de 00:25:13.088 08:51:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e3566205-86d3-49cd-b714-d61ba727fd40 30 00:25:13.347 08:51:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8e7df20c-77dc-4c11-b8e6-72588f3772de MY_CLONE 00:25:13.605 08:51:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=aea05b01-0093-4096-be01-441ecb744305 00:25:13.605 08:51:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate aea05b01-0093-4096-be01-441ecb744305 00:25:14.540 08:51:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2251555 00:25:22.711 Initializing NVMe Controllers 00:25:22.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:25:22.711 Controller IO queue size 128, less than required. 00:25:22.711 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:22.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:25:22.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:25:22.711 Initialization complete. Launching workers. 
00:25:22.711 ======================================================== 00:25:22.711 Latency(us) 00:25:22.711 Device Information : IOPS MiB/s Average min max 00:25:22.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10821.30 42.27 11833.57 1055.50 72441.67 00:25:22.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10686.70 41.74 11980.88 2094.79 68980.30 00:25:22.711 ======================================================== 00:25:22.711 Total : 21508.00 84.02 11906.77 1055.50 72441.67 00:25:22.711 00:25:22.711 08:51:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:22.711 08:51:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e3566205-86d3-49cd-b714-d61ba727fd40 00:25:22.986 08:51:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6c654154-7cd9-4d58-8ef0-99af1d0c67dd 00:25:22.986 08:51:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:25:22.986 08:51:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:25:22.986 08:51:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:25:22.986 08:51:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:22.986 08:51:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:25:22.986 08:51:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:22.986 08:51:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:25:22.986 08:51:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:22.986 08:51:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:22.986 rmmod nvme_tcp 00:25:22.986 rmmod nvme_fabrics 00:25:22.986 rmmod nvme_keyring 00:25:23.245 08:51:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:23.245 08:51:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:25:23.245 08:51:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:25:23.245 08:51:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2251249 ']' 00:25:23.245 08:51:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2251249 00:25:23.245 08:51:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@947 -- # '[' -z 2251249 ']' 00:25:23.245 08:51:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # kill -0 2251249 00:25:23.245 08:51:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # uname 00:25:23.245 08:51:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:23.245 08:51:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2251249 00:25:23.245 08:51:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:25:23.245 08:51:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:25:23.245 08:51:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2251249' 00:25:23.245 killing process with pid 2251249 00:25:23.245 08:51:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # kill 2251249 00:25:23.245 [2024-05-15 08:51:17.823627] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' 
scheduled for removal in v24.09 hit 1 times 00:25:23.245 08:51:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@971 -- # wait 2251249 00:25:23.503 08:51:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:23.503 08:51:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:23.503 08:51:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:23.503 08:51:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:23.503 08:51:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:23.503 08:51:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.503 08:51:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:23.503 08:51:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.405 08:51:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:25.405 00:25:25.405 real 0m19.274s 00:25:25.405 user 1m4.805s 00:25:25.405 sys 0m5.721s 00:25:25.405 08:51:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:25.405 08:51:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:25:25.405 ************************************ 00:25:25.405 END TEST nvmf_lvol 00:25:25.405 ************************************ 00:25:25.405 08:51:20 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:25:25.405 08:51:20 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:25:25.405 08:51:20 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:25.405 08:51:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:25.405 ************************************ 00:25:25.405 START TEST nvmf_lvs_grow 00:25:25.405 ************************************ 00:25:25.405 08:51:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:25:25.663 * Looking for test storage... 
00:25:25.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:25.663 08:51:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:25.663 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:25:25.663 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.663 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.663 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.663 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.663 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.663 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:25:25.664 08:51:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:28.197 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:28.197 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:28.197 Found net devices under 0000:09:00.0: cvl_0_0 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:28.197 Found net devices under 0000:09:00.1: cvl_0_1 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.197 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:28.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:28.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:25:28.198 00:25:28.198 --- 10.0.0.2 ping statistics --- 00:25:28.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.198 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:28.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:25:28.198 00:25:28.198 --- 10.0.0.1 ping statistics --- 00:25:28.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.198 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2255729 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2255729 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@828 -- # '[' -z 2255729 ']' 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:28.198 08:51:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:25:28.456 [2024-05-15 08:51:23.022682] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:25:28.456 [2024-05-15 08:51:23.022752] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.456 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.456 [2024-05-15 08:51:23.093911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.456 [2024-05-15 08:51:23.175640] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.456 [2024-05-15 08:51:23.175719] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:28.456 [2024-05-15 08:51:23.175735] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.456 [2024-05-15 08:51:23.175746] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.456 [2024-05-15 08:51:23.175756] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:28.456 [2024-05-15 08:51:23.175783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.713 08:51:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:28.713 08:51:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@861 -- # return 0 00:25:28.713 08:51:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:28.713 08:51:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:28.713 08:51:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:25:28.713 08:51:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.713 08:51:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:28.969 [2024-05-15 08:51:23.535020] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.969 08:51:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:25:28.969 08:51:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:25:28.969 08:51:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:28.969 08:51:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:25:28.969 ************************************ 00:25:28.969 START TEST lvs_grow_clean 00:25:28.969 ************************************ 00:25:28.969 08:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # lvs_grow 00:25:28.969 08:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:25:28.969 08:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:25:28.969 08:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:25:28.969 08:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:25:28.969 08:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:25:28.969 08:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:25:28.969 08:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:25:28.969 08:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:25:28.969 08:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:25:29.225 08:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:25:29.225 08:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:25:29.483 08:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=69c2e54e-155e-40f4-b701-c31f3f3ab2b6 00:25:29.483 08:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69c2e54e-155e-40f4-b701-c31f3f3ab2b6 00:25:29.483 08:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:25:29.740 08:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:25:29.740 08:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:25:29.740 08:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 69c2e54e-155e-40f4-b701-c31f3f3ab2b6 lvol 150 00:25:29.997 08:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3e6e0d15-aa9c-4e0b-bb2d-cdbeb8534a09 00:25:29.997 08:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:25:29.997 08:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:25:30.254 [2024-05-15 08:51:24.838351] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:25:30.254 [2024-05-15 08:51:24.838451] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:25:30.254 true 00:25:30.254 08:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69c2e54e-155e-40f4-b701-c31f3f3ab2b6 00:25:30.254 08:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:25:30.512 08:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:25:30.512 08:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:25:30.770 08:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3e6e0d15-aa9c-4e0b-bb2d-cdbeb8534a09 00:25:31.027 08:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:31.285 [2024-05-15 08:51:25.841173] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:31.285 [2024-05-15 
08:51:25.841582] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.285 08:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:31.543 08:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2256163 00:25:31.543 08:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:31.543 08:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:25:31.543 08:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2256163 /var/tmp/bdevperf.sock 00:25:31.543 08:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@828 -- # '[' -z 2256163 ']' 00:25:31.543 08:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:31.543 08:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:31.543 08:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:31.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:31.543 08:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:31.543 08:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:25:31.543 [2024-05-15 08:51:26.192041] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
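bdevperf runs here as a second SPDK application: -z keeps it idle until it is driven over its own RPC socket, bdev_nvme_attach_controller connects it to the freshly exported subsystem, and perform_tests kicks off the 10-second randwrite run whose per-second totals follow. Condensed from the exact command lines in the log, with the workspace prefix shortened for readability:

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
        -w randwrite -t 10 -S 1 -z &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests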
00:25:31.543 [2024-05-15 08:51:26.192121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2256163 ] 00:25:31.543 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.543 [2024-05-15 08:51:26.261840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.801 [2024-05-15 08:51:26.351921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.801 08:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:31.801 08:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@861 -- # return 0 00:25:31.801 08:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:25:32.057 Nvme0n1 00:25:32.057 08:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:25:32.315 [ 00:25:32.315 { 00:25:32.315 "name": "Nvme0n1", 00:25:32.315 "aliases": [ 00:25:32.315 "3e6e0d15-aa9c-4e0b-bb2d-cdbeb8534a09" 00:25:32.315 ], 00:25:32.315 "product_name": "NVMe disk", 00:25:32.315 "block_size": 4096, 00:25:32.315 "num_blocks": 38912, 00:25:32.315 "uuid": "3e6e0d15-aa9c-4e0b-bb2d-cdbeb8534a09", 00:25:32.315 "assigned_rate_limits": { 00:25:32.315 "rw_ios_per_sec": 0, 00:25:32.315 "rw_mbytes_per_sec": 0, 00:25:32.315 "r_mbytes_per_sec": 0, 00:25:32.315 "w_mbytes_per_sec": 0 00:25:32.315 }, 00:25:32.315 "claimed": false, 00:25:32.315 "zoned": false, 00:25:32.315 "supported_io_types": { 00:25:32.315 "read": true, 00:25:32.315 "write": true, 00:25:32.315 "unmap": true, 00:25:32.315 "write_zeroes": true, 00:25:32.315 "flush": true, 00:25:32.315 "reset": true, 00:25:32.315 "compare": true, 00:25:32.315 "compare_and_write": true, 00:25:32.315 "abort": true, 00:25:32.315 "nvme_admin": true, 00:25:32.315 "nvme_io": true 00:25:32.315 }, 00:25:32.315 "memory_domains": [ 00:25:32.315 { 00:25:32.315 "dma_device_id": "system", 00:25:32.315 "dma_device_type": 1 00:25:32.315 } 00:25:32.315 ], 00:25:32.315 "driver_specific": { 00:25:32.315 "nvme": [ 00:25:32.315 { 00:25:32.315 "trid": { 00:25:32.315 "trtype": "TCP", 00:25:32.315 "adrfam": "IPv4", 00:25:32.315 "traddr": "10.0.0.2", 00:25:32.315 "trsvcid": "4420", 00:25:32.315 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:32.315 }, 00:25:32.315 "ctrlr_data": { 00:25:32.315 "cntlid": 1, 00:25:32.315 "vendor_id": "0x8086", 00:25:32.315 "model_number": "SPDK bdev Controller", 00:25:32.315 "serial_number": "SPDK0", 00:25:32.315 "firmware_revision": "24.05", 00:25:32.315 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:32.315 "oacs": { 00:25:32.315 "security": 0, 00:25:32.315 "format": 0, 00:25:32.315 "firmware": 0, 00:25:32.315 "ns_manage": 0 00:25:32.315 }, 00:25:32.315 "multi_ctrlr": true, 00:25:32.315 "ana_reporting": false 00:25:32.315 }, 00:25:32.315 "vs": { 00:25:32.315 "nvme_version": "1.3" 00:25:32.315 }, 00:25:32.315 "ns_data": { 00:25:32.315 "id": 1, 00:25:32.315 "can_share": true 00:25:32.315 } 00:25:32.315 } 00:25:32.315 ], 00:25:32.315 "mp_policy": "active_passive" 00:25:32.315 } 00:25:32.315 } 00:25:32.315 ] 00:25:32.315 08:51:27 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2256182 00:25:32.315 08:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:25:32.315 08:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:32.574 Running I/O for 10 seconds... 00:25:33.508 Latency(us) 00:25:33.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.508 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:33.508 Nvme0n1 : 1.00 14733.00 57.55 0.00 0.00 0.00 0.00 0.00 00:25:33.508 =================================================================================================================== 00:25:33.508 Total : 14733.00 57.55 0.00 0.00 0.00 0.00 0.00 00:25:33.508 00:25:34.440 08:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 69c2e54e-155e-40f4-b701-c31f3f3ab2b6 00:25:34.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:34.440 Nvme0n1 : 2.00 14637.50 57.18 0.00 0.00 0.00 0.00 0.00 00:25:34.440 =================================================================================================================== 00:25:34.440 Total : 14637.50 57.18 0.00 0.00 0.00 0.00 0.00 00:25:34.440 00:25:34.698 true 00:25:34.698 08:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:25:34.698 08:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69c2e54e-155e-40f4-b701-c31f3f3ab2b6 00:25:34.956 08:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:25:34.956 08:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:25:34.956 08:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2256182 00:25:35.522 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:35.522 Nvme0n1 : 3.00 14648.00 57.22 0.00 0.00 0.00 0.00 0.00 00:25:35.522 =================================================================================================================== 00:25:35.522 Total : 14648.00 57.22 0.00 0.00 0.00 0.00 0.00 00:25:35.522 00:25:36.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:36.455 Nvme0n1 : 4.00 14828.25 57.92 0.00 0.00 0.00 0.00 0.00 00:25:36.455 =================================================================================================================== 00:25:36.455 Total : 14828.25 57.92 0.00 0.00 0.00 0.00 0.00 00:25:36.455 00:25:37.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:37.863 Nvme0n1 : 5.00 14872.40 58.10 0.00 0.00 0.00 0.00 0.00 00:25:37.863 =================================================================================================================== 00:25:37.863 Total : 14872.40 58.10 0.00 0.00 0.00 0.00 0.00 00:25:37.863 00:25:38.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:38.429 Nvme0n1 : 6.00 14933.67 58.33 0.00 0.00 0.00 0.00 0.00 00:25:38.429 
=================================================================================================================== 00:25:38.429 Total : 14933.67 58.33 0.00 0.00 0.00 0.00 0.00 00:25:38.429 00:25:39.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:39.801 Nvme0n1 : 7.00 14941.14 58.36 0.00 0.00 0.00 0.00 0.00 00:25:39.801 =================================================================================================================== 00:25:39.801 Total : 14941.14 58.36 0.00 0.00 0.00 0.00 0.00 00:25:39.801 00:25:40.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:40.735 Nvme0n1 : 8.00 14946.75 58.39 0.00 0.00 0.00 0.00 0.00 00:25:40.735 =================================================================================================================== 00:25:40.735 Total : 14946.75 58.39 0.00 0.00 0.00 0.00 0.00 00:25:40.735 00:25:41.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:41.668 Nvme0n1 : 9.00 14945.67 58.38 0.00 0.00 0.00 0.00 0.00 00:25:41.668 =================================================================================================================== 00:25:41.668 Total : 14945.67 58.38 0.00 0.00 0.00 0.00 0.00 00:25:41.668 00:25:42.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:42.602 Nvme0n1 : 10.00 15008.40 58.63 0.00 0.00 0.00 0.00 0.00 00:25:42.602 =================================================================================================================== 00:25:42.602 Total : 15008.40 58.63 0.00 0.00 0.00 0.00 0.00 00:25:42.602 00:25:42.602 00:25:42.602 Latency(us) 00:25:42.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:42.602 Nvme0n1 : 10.00 15008.37 58.63 0.00 0.00 8522.79 2536.49 16214.09 00:25:42.602 =================================================================================================================== 00:25:42.602 Total : 15008.37 58.63 0.00 0.00 8522.79 2536.49 16214.09 00:25:42.602 0 00:25:42.602 08:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2256163 00:25:42.602 08:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # '[' -z 2256163 ']' 00:25:42.602 08:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # kill -0 2256163 00:25:42.602 08:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # uname 00:25:42.602 08:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:42.602 08:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2256163 00:25:42.602 08:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:42.602 08:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:42.602 08:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2256163' 00:25:42.602 killing process with pid 2256163 00:25:42.602 08:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # kill 2256163 00:25:42.602 Received shutdown signal, test time was about 10.000000 seconds 00:25:42.602 00:25:42.602 Latency(us) 00:25:42.602 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:25:42.602 =================================================================================================================== 00:25:42.602 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:42.602 08:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # wait 2256163 00:25:42.860 08:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:43.117 08:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:43.375 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69c2e54e-155e-40f4-b701-c31f3f3ab2b6 00:25:43.375 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:25:43.633 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:25:43.633 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:25:43.634 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:25:43.891 [2024-05-15 08:51:38.526010] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:25:43.891 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69c2e54e-155e-40f4-b701-c31f3f3ab2b6 00:25:43.891 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:25:43.891 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69c2e54e-155e-40f4-b701-c31f3f3ab2b6 00:25:43.891 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:43.892 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:43.892 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:43.892 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:43.892 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:43.892 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:43.892 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:43.892 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:25:43.892 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69c2e54e-155e-40f4-b701-c31f3f3ab2b6 00:25:44.150 request: 00:25:44.150 { 00:25:44.150 "uuid": "69c2e54e-155e-40f4-b701-c31f3f3ab2b6", 00:25:44.150 "method": "bdev_lvol_get_lvstores", 00:25:44.150 "req_id": 1 00:25:44.150 } 00:25:44.150 Got JSON-RPC error response 00:25:44.150 response: 00:25:44.150 { 00:25:44.150 "code": -19, 00:25:44.150 "message": "No such device" 00:25:44.150 } 00:25:44.150 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:25:44.150 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:44.150 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:44.150 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:44.150 08:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:25:44.408 aio_bdev 00:25:44.408 08:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3e6e0d15-aa9c-4e0b-bb2d-cdbeb8534a09 00:25:44.408 08:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_name=3e6e0d15-aa9c-4e0b-bb2d-cdbeb8534a09 00:25:44.408 08:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:25:44.408 08:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local i 00:25:44.408 08:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:25:44.408 08:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:25:44.408 08:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:44.666 08:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3e6e0d15-aa9c-4e0b-bb2d-cdbeb8534a09 -t 2000 00:25:44.924 [ 00:25:44.924 { 00:25:44.924 "name": "3e6e0d15-aa9c-4e0b-bb2d-cdbeb8534a09", 00:25:44.924 "aliases": [ 00:25:44.924 "lvs/lvol" 00:25:44.924 ], 00:25:44.924 "product_name": "Logical Volume", 00:25:44.924 "block_size": 4096, 00:25:44.924 "num_blocks": 38912, 00:25:44.924 "uuid": "3e6e0d15-aa9c-4e0b-bb2d-cdbeb8534a09", 00:25:44.924 "assigned_rate_limits": { 00:25:44.924 "rw_ios_per_sec": 0, 00:25:44.924 "rw_mbytes_per_sec": 0, 00:25:44.924 "r_mbytes_per_sec": 0, 00:25:44.924 "w_mbytes_per_sec": 0 00:25:44.924 }, 00:25:44.924 "claimed": false, 00:25:44.924 "zoned": false, 00:25:44.924 "supported_io_types": { 00:25:44.924 "read": true, 00:25:44.924 "write": true, 00:25:44.924 "unmap": true, 00:25:44.924 "write_zeroes": true, 00:25:44.924 "flush": false, 00:25:44.924 "reset": true, 00:25:44.924 "compare": false, 00:25:44.924 "compare_and_write": false, 00:25:44.924 "abort": false, 00:25:44.924 "nvme_admin": false, 00:25:44.924 "nvme_io": false 00:25:44.924 }, 00:25:44.924 "driver_specific": { 00:25:44.924 "lvol": { 00:25:44.924 "lvol_store_uuid": "69c2e54e-155e-40f4-b701-c31f3f3ab2b6", 00:25:44.924 "base_bdev": "aio_bdev", 
00:25:44.924 "thin_provision": false, 00:25:44.924 "num_allocated_clusters": 38, 00:25:44.924 "snapshot": false, 00:25:44.924 "clone": false, 00:25:44.924 "esnap_clone": false 00:25:44.924 } 00:25:44.924 } 00:25:44.924 } 00:25:44.924 ] 00:25:44.924 08:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # return 0 00:25:44.924 08:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69c2e54e-155e-40f4-b701-c31f3f3ab2b6 00:25:44.924 08:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:25:45.182 08:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:25:45.182 08:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69c2e54e-155e-40f4-b701-c31f3f3ab2b6 00:25:45.182 08:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:25:45.440 08:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:25:45.440 08:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3e6e0d15-aa9c-4e0b-bb2d-cdbeb8534a09 00:25:45.697 08:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 69c2e54e-155e-40f4-b701-c31f3f3ab2b6 00:25:45.955 08:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:25:46.213 08:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:25:46.213 00:25:46.213 real 0m17.394s 00:25:46.213 user 0m16.762s 00:25:46.213 sys 0m1.934s 00:25:46.213 08:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:46.213 08:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:25:46.213 ************************************ 00:25:46.213 END TEST lvs_grow_clean 00:25:46.213 ************************************ 00:25:46.470 08:51:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:25:46.471 08:51:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:25:46.471 08:51:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:46.471 08:51:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:25:46.471 ************************************ 00:25:46.471 START TEST lvs_grow_dirty 00:25:46.471 ************************************ 00:25:46.471 08:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # lvs_grow dirty 00:25:46.471 08:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:25:46.471 08:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:25:46.471 08:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:25:46.471 08:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:25:46.471 08:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:25:46.471 08:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:25:46.471 08:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:25:46.471 08:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:25:46.471 08:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:25:46.729 08:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:25:46.729 08:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:25:46.986 08:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=51d4eba1-a89a-4caa-9358-3bf7bf7403c0 00:25:46.986 08:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51d4eba1-a89a-4caa-9358-3bf7bf7403c0 00:25:46.986 08:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:25:47.245 08:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:25:47.245 08:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:25:47.245 08:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 51d4eba1-a89a-4caa-9358-3bf7bf7403c0 lvol 150 00:25:47.503 08:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=68062bbe-ac02-4aa7-aa6a-cac3dbccf486 00:25:47.503 08:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:25:47.503 08:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:25:47.761 [2024-05-15 08:51:42.366509] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:25:47.761 [2024-05-15 08:51:42.366618] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:25:47.761 true 00:25:47.761 08:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51d4eba1-a89a-4caa-9358-3bf7bf7403c0 00:25:47.761 08:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:25:48.019 08:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:25:48.019 08:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:25:48.277 08:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 68062bbe-ac02-4aa7-aa6a-cac3dbccf486 00:25:48.540 08:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:48.800 [2024-05-15 08:51:43.353572] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:48.800 08:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:49.057 08:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2258220 00:25:49.057 08:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:25:49.057 08:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:49.057 08:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2258220 /var/tmp/bdevperf.sock 00:25:49.057 08:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 2258220 ']' 00:25:49.057 08:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:49.057 08:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:49.057 08:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:49.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:49.057 08:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:49.057 08:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:25:49.057 [2024-05-15 08:51:43.655765] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
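The cluster assertions in both the clean and dirty runs are plain arithmetic over the sizes in use: a 200 MiB backing file with a 4 MiB cluster size gives 200/4 = 50 clusters, and the log's own 49/99 values imply one cluster is held back for lvstore metadata, so total_data_clusters starts at 49. Growing the file to 400 MiB and rescanning makes 400/4 - 1 = 99 clusters available once the grow RPC lands, which is exactly what the later check expects. The grow path, condensed from the RPCs in the log (workspace prefix shortened):

    truncate -s 400M .../test/nvmf/target/aio_bdev     # enlarge the backing file
    ./scripts/rpc.py bdev_aio_rescan aio_bdev          # bdev picks up the new size
    ./scripts/rpc.py bdev_lvol_grow_lvstore -u 51d4eba1-a89a-4caa-9358-3bf7bf7403c0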
00:25:49.057 [2024-05-15 08:51:43.655851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2258220 ] 00:25:49.057 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.057 [2024-05-15 08:51:43.726413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.057 [2024-05-15 08:51:43.815376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.315 08:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:49.315 08:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:25:49.315 08:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:25:49.574 Nvme0n1 00:25:49.574 08:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:25:49.831 [ 00:25:49.831 { 00:25:49.831 "name": "Nvme0n1", 00:25:49.831 "aliases": [ 00:25:49.831 "68062bbe-ac02-4aa7-aa6a-cac3dbccf486" 00:25:49.831 ], 00:25:49.831 "product_name": "NVMe disk", 00:25:49.831 "block_size": 4096, 00:25:49.831 "num_blocks": 38912, 00:25:49.831 "uuid": "68062bbe-ac02-4aa7-aa6a-cac3dbccf486", 00:25:49.831 "assigned_rate_limits": { 00:25:49.831 "rw_ios_per_sec": 0, 00:25:49.831 "rw_mbytes_per_sec": 0, 00:25:49.831 "r_mbytes_per_sec": 0, 00:25:49.831 "w_mbytes_per_sec": 0 00:25:49.831 }, 00:25:49.831 "claimed": false, 00:25:49.831 "zoned": false, 00:25:49.831 "supported_io_types": { 00:25:49.831 "read": true, 00:25:49.831 "write": true, 00:25:49.831 "unmap": true, 00:25:49.831 "write_zeroes": true, 00:25:49.831 "flush": true, 00:25:49.831 "reset": true, 00:25:49.831 "compare": true, 00:25:49.831 "compare_and_write": true, 00:25:49.831 "abort": true, 00:25:49.831 "nvme_admin": true, 00:25:49.831 "nvme_io": true 00:25:49.831 }, 00:25:49.831 "memory_domains": [ 00:25:49.831 { 00:25:49.831 "dma_device_id": "system", 00:25:49.831 "dma_device_type": 1 00:25:49.831 } 00:25:49.831 ], 00:25:49.831 "driver_specific": { 00:25:49.831 "nvme": [ 00:25:49.832 { 00:25:49.832 "trid": { 00:25:49.832 "trtype": "TCP", 00:25:49.832 "adrfam": "IPv4", 00:25:49.832 "traddr": "10.0.0.2", 00:25:49.832 "trsvcid": "4420", 00:25:49.832 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:49.832 }, 00:25:49.832 "ctrlr_data": { 00:25:49.832 "cntlid": 1, 00:25:49.832 "vendor_id": "0x8086", 00:25:49.832 "model_number": "SPDK bdev Controller", 00:25:49.832 "serial_number": "SPDK0", 00:25:49.832 "firmware_revision": "24.05", 00:25:49.832 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:49.832 "oacs": { 00:25:49.832 "security": 0, 00:25:49.832 "format": 0, 00:25:49.832 "firmware": 0, 00:25:49.832 "ns_manage": 0 00:25:49.832 }, 00:25:49.832 "multi_ctrlr": true, 00:25:49.832 "ana_reporting": false 00:25:49.832 }, 00:25:49.832 "vs": { 00:25:49.832 "nvme_version": "1.3" 00:25:49.832 }, 00:25:49.832 "ns_data": { 00:25:49.832 "id": 1, 00:25:49.832 "can_share": true 00:25:49.832 } 00:25:49.832 } 00:25:49.832 ], 00:25:49.832 "mp_policy": "active_passive" 00:25:49.832 } 00:25:49.832 } 00:25:49.832 ] 00:25:50.089 08:51:44 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2258355 00:25:50.089 08:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:50.089 08:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:25:50.089 Running I/O for 10 seconds... 00:25:51.024 Latency(us) 00:25:51.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:51.024 Nvme0n1 : 1.00 14163.00 55.32 0.00 0.00 0.00 0.00 0.00 00:25:51.024 =================================================================================================================== 00:25:51.024 Total : 14163.00 55.32 0.00 0.00 0.00 0.00 0.00 00:25:51.024 00:25:52.023 08:51:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 51d4eba1-a89a-4caa-9358-3bf7bf7403c0 00:25:52.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:52.023 Nvme0n1 : 2.00 14575.50 56.94 0.00 0.00 0.00 0.00 0.00 00:25:52.023 =================================================================================================================== 00:25:52.023 Total : 14575.50 56.94 0.00 0.00 0.00 0.00 0.00 00:25:52.023 00:25:52.281 true 00:25:52.281 08:51:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51d4eba1-a89a-4caa-9358-3bf7bf7403c0 00:25:52.281 08:51:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:25:52.539 08:51:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:25:52.539 08:51:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:25:52.539 08:51:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2258355 00:25:53.104 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:53.104 Nvme0n1 : 3.00 14606.33 57.06 0.00 0.00 0.00 0.00 0.00 00:25:53.104 =================================================================================================================== 00:25:53.104 Total : 14606.33 57.06 0.00 0.00 0.00 0.00 0.00 00:25:53.104 00:25:54.035 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:54.035 Nvme0n1 : 4.00 14780.75 57.74 0.00 0.00 0.00 0.00 0.00 00:25:54.035 =================================================================================================================== 00:25:54.035 Total : 14780.75 57.74 0.00 0.00 0.00 0.00 0.00 00:25:54.035 00:25:54.968 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:54.968 Nvme0n1 : 5.00 14783.60 57.75 0.00 0.00 0.00 0.00 0.00 00:25:54.968 =================================================================================================================== 00:25:54.968 Total : 14783.60 57.75 0.00 0.00 0.00 0.00 0.00 00:25:54.968 00:25:56.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:56.341 Nvme0n1 : 6.00 14797.67 57.80 0.00 0.00 0.00 0.00 0.00 00:25:56.341 
=================================================================================================================== 00:25:56.341 Total : 14797.67 57.80 0.00 0.00 0.00 0.00 0.00 00:25:56.341 00:25:57.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:57.310 Nvme0n1 : 7.00 14817.43 57.88 0.00 0.00 0.00 0.00 0.00 00:25:57.310 =================================================================================================================== 00:25:57.310 Total : 14817.43 57.88 0.00 0.00 0.00 0.00 0.00 00:25:57.310 00:25:58.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:58.239 Nvme0n1 : 8.00 14838.50 57.96 0.00 0.00 0.00 0.00 0.00 00:25:58.239 =================================================================================================================== 00:25:58.239 Total : 14838.50 57.96 0.00 0.00 0.00 0.00 0.00 00:25:58.239 00:25:59.173 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:59.173 Nvme0n1 : 9.00 14840.78 57.97 0.00 0.00 0.00 0.00 0.00 00:25:59.173 =================================================================================================================== 00:25:59.173 Total : 14840.78 57.97 0.00 0.00 0.00 0.00 0.00 00:25:59.173 00:26:00.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:00.105 Nvme0n1 : 10.00 14849.70 58.01 0.00 0.00 0.00 0.00 0.00 00:26:00.105 =================================================================================================================== 00:26:00.105 Total : 14849.70 58.01 0.00 0.00 0.00 0.00 0.00 00:26:00.105 00:26:00.105 00:26:00.105 Latency(us) 00:26:00.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:00.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:00.105 Nvme0n1 : 10.01 14853.83 58.02 0.00 0.00 8612.28 2281.62 17282.09 00:26:00.105 =================================================================================================================== 00:26:00.105 Total : 14853.83 58.02 0.00 0.00 8612.28 2281.62 17282.09 00:26:00.105 0 00:26:00.105 08:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2258220 00:26:00.105 08:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # '[' -z 2258220 ']' 00:26:00.105 08:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # kill -0 2258220 00:26:00.105 08:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # uname 00:26:00.105 08:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:00.105 08:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2258220 00:26:00.105 08:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:26:00.105 08:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:26:00.105 08:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2258220' 00:26:00.105 killing process with pid 2258220 00:26:00.105 08:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # kill 2258220 00:26:00.105 Received shutdown signal, test time was about 10.000000 seconds 00:26:00.105 00:26:00.105 Latency(us) 00:26:00.105 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:26:00.105 =================================================================================================================== 00:26:00.105 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:00.105 08:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # wait 2258220 00:26:00.362 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:00.619 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:00.877 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51d4eba1-a89a-4caa-9358-3bf7bf7403c0 00:26:00.877 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:26:01.135 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:26:01.135 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:26:01.135 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2255729 00:26:01.135 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2255729 00:26:01.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2255729 Killed "${NVMF_APP[@]}" "$@" 00:26:01.135 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:26:01.135 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:26:01.135 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:01.135 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:01.135 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:26:01.135 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2259694 00:26:01.135 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:01.135 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2259694 00:26:01.135 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 2259694 ']' 00:26:01.135 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.135 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:01.135 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
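This kill is the point of the dirty variant: the first target (pid 2255729) dies on SIGKILL, so the lvstore is never cleanly unloaded. When the replacement target re-creates the AIO bdev below, the blobstore notices the unclean shutdown and replays recovery (the bs_recover and 'Recover: blob' notices) before the lvol becomes visible again. In outline, using the same commands as the trace:

    kill -9 $nvmfpid        # simulate a crash; no clean lvstore unload happens
    # restart nvmf_tgt, then re-attach the backing file:
    ./scripts/rpc.py bdev_aio_create .../test/nvmf/target/aio_bdev aio_bdev 4096
    # -> blobstore.c: bs_recover: Performing recovery on blobstore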
00:26:01.135 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:01.135 08:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:26:01.135 [2024-05-15 08:51:55.829908] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:26:01.135 [2024-05-15 08:51:55.830004] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.135 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.135 [2024-05-15 08:51:55.923081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.392 [2024-05-15 08:51:56.016543] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:01.392 [2024-05-15 08:51:56.016614] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:01.392 [2024-05-15 08:51:56.016652] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:01.392 [2024-05-15 08:51:56.016675] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:01.392 [2024-05-15 08:51:56.016694] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:01.392 [2024-05-15 08:51:56.016734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.392 08:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:01.392 08:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:26:01.392 08:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:01.392 08:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:01.392 08:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:26:01.392 08:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.392 08:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:26:01.958 [2024-05-15 08:51:56.447603] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:01.958 [2024-05-15 08:51:56.447744] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:01.958 [2024-05-15 08:51:56.447805] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:01.958 08:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:26:01.958 08:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 68062bbe-ac02-4aa7-aa6a-cac3dbccf486 00:26:01.958 08:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=68062bbe-ac02-4aa7-aa6a-cac3dbccf486 00:26:01.958 08:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:26:01.958 08:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:26:01.958 08:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@899 -- # [[ -z '' ]] 00:26:01.958 08:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:26:01.958 08:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:01.958 08:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 68062bbe-ac02-4aa7-aa6a-cac3dbccf486 -t 2000 00:26:02.216 [ 00:26:02.216 { 00:26:02.216 "name": "68062bbe-ac02-4aa7-aa6a-cac3dbccf486", 00:26:02.216 "aliases": [ 00:26:02.216 "lvs/lvol" 00:26:02.216 ], 00:26:02.216 "product_name": "Logical Volume", 00:26:02.216 "block_size": 4096, 00:26:02.216 "num_blocks": 38912, 00:26:02.216 "uuid": "68062bbe-ac02-4aa7-aa6a-cac3dbccf486", 00:26:02.216 "assigned_rate_limits": { 00:26:02.216 "rw_ios_per_sec": 0, 00:26:02.216 "rw_mbytes_per_sec": 0, 00:26:02.216 "r_mbytes_per_sec": 0, 00:26:02.216 "w_mbytes_per_sec": 0 00:26:02.216 }, 00:26:02.216 "claimed": false, 00:26:02.216 "zoned": false, 00:26:02.216 "supported_io_types": { 00:26:02.216 "read": true, 00:26:02.216 "write": true, 00:26:02.216 "unmap": true, 00:26:02.216 "write_zeroes": true, 00:26:02.216 "flush": false, 00:26:02.216 "reset": true, 00:26:02.216 "compare": false, 00:26:02.216 "compare_and_write": false, 00:26:02.216 "abort": false, 00:26:02.216 "nvme_admin": false, 00:26:02.216 "nvme_io": false 00:26:02.216 }, 00:26:02.216 "driver_specific": { 00:26:02.216 "lvol": { 00:26:02.216 "lvol_store_uuid": "51d4eba1-a89a-4caa-9358-3bf7bf7403c0", 00:26:02.216 "base_bdev": "aio_bdev", 00:26:02.216 "thin_provision": false, 00:26:02.216 "num_allocated_clusters": 38, 00:26:02.216 "snapshot": false, 00:26:02.216 "clone": false, 00:26:02.216 "esnap_clone": false 00:26:02.216 } 00:26:02.216 } 00:26:02.216 } 00:26:02.216 ] 00:26:02.216 08:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:26:02.216 08:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51d4eba1-a89a-4caa-9358-3bf7bf7403c0 00:26:02.216 08:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:26:02.473 08:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:26:02.473 08:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51d4eba1-a89a-4caa-9358-3bf7bf7403c0 00:26:02.473 08:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:26:02.730 08:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:26:02.730 08:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:26:03.295 [2024-05-15 08:51:57.788802] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:26:03.295 08:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
51d4eba1-a89a-4caa-9358-3bf7bf7403c0 00:26:03.295 08:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:26:03.295 08:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51d4eba1-a89a-4caa-9358-3bf7bf7403c0 00:26:03.295 08:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:03.295 08:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:03.295 08:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:03.295 08:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:03.295 08:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:03.295 08:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:03.295 08:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:03.295 08:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:26:03.295 08:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51d4eba1-a89a-4caa-9358-3bf7bf7403c0 00:26:03.295 request: 00:26:03.295 { 00:26:03.295 "uuid": "51d4eba1-a89a-4caa-9358-3bf7bf7403c0", 00:26:03.295 "method": "bdev_lvol_get_lvstores", 00:26:03.295 "req_id": 1 00:26:03.295 } 00:26:03.295 Got JSON-RPC error response 00:26:03.295 response: 00:26:03.295 { 00:26:03.295 "code": -19, 00:26:03.295 "message": "No such device" 00:26:03.295 } 00:26:03.295 08:51:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:26:03.295 08:51:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:03.295 08:51:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:03.295 08:51:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:03.295 08:51:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:26:03.861 aio_bdev 00:26:03.861 08:51:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 68062bbe-ac02-4aa7-aa6a-cac3dbccf486 00:26:03.861 08:51:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=68062bbe-ac02-4aa7-aa6a-cac3dbccf486 00:26:03.861 08:51:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:26:03.861 08:51:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:26:03.861 08:51:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # [[ -z '' ]] 
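The expected-failure check above relies on the harness's NOT wrapper: once bdev_aio_delete has removed the base bdev, bdev_lvol_get_lvstores must fail, and the JSON-RPC error (code -19, 'No such device') is the passing outcome. A reduced sketch of that inversion, not the actual autotest_common.sh implementation (which also does the es bookkeeping visible in the trace):

    NOT() {
        if "$@"; then return 1; fi   # unexpected success fails the test
        return 0                     # the expected failure passes
    }
    NOT ./scripts/rpc.py bdev_lvol_get_lvstores -u 51d4eba1-a89a-4caa-9358-3bf7bf7403c0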
00:26:03.861 08:51:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:26:03.861 08:51:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:03.861 08:51:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 68062bbe-ac02-4aa7-aa6a-cac3dbccf486 -t 2000 00:26:04.119 [ 00:26:04.119 { 00:26:04.119 "name": "68062bbe-ac02-4aa7-aa6a-cac3dbccf486", 00:26:04.119 "aliases": [ 00:26:04.119 "lvs/lvol" 00:26:04.119 ], 00:26:04.119 "product_name": "Logical Volume", 00:26:04.119 "block_size": 4096, 00:26:04.119 "num_blocks": 38912, 00:26:04.119 "uuid": "68062bbe-ac02-4aa7-aa6a-cac3dbccf486", 00:26:04.119 "assigned_rate_limits": { 00:26:04.119 "rw_ios_per_sec": 0, 00:26:04.119 "rw_mbytes_per_sec": 0, 00:26:04.119 "r_mbytes_per_sec": 0, 00:26:04.119 "w_mbytes_per_sec": 0 00:26:04.119 }, 00:26:04.119 "claimed": false, 00:26:04.119 "zoned": false, 00:26:04.119 "supported_io_types": { 00:26:04.119 "read": true, 00:26:04.119 "write": true, 00:26:04.119 "unmap": true, 00:26:04.119 "write_zeroes": true, 00:26:04.119 "flush": false, 00:26:04.119 "reset": true, 00:26:04.119 "compare": false, 00:26:04.119 "compare_and_write": false, 00:26:04.119 "abort": false, 00:26:04.119 "nvme_admin": false, 00:26:04.119 "nvme_io": false 00:26:04.119 }, 00:26:04.119 "driver_specific": { 00:26:04.119 "lvol": { 00:26:04.119 "lvol_store_uuid": "51d4eba1-a89a-4caa-9358-3bf7bf7403c0", 00:26:04.119 "base_bdev": "aio_bdev", 00:26:04.119 "thin_provision": false, 00:26:04.119 "num_allocated_clusters": 38, 00:26:04.119 "snapshot": false, 00:26:04.119 "clone": false, 00:26:04.119 "esnap_clone": false 00:26:04.119 } 00:26:04.119 } 00:26:04.119 } 00:26:04.119 ] 00:26:04.119 08:51:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:26:04.119 08:51:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51d4eba1-a89a-4caa-9358-3bf7bf7403c0 00:26:04.119 08:51:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:26:04.376 08:51:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:26:04.376 08:51:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51d4eba1-a89a-4caa-9358-3bf7bf7403c0 00:26:04.376 08:51:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:26:04.633 08:51:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:26:04.633 08:51:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 68062bbe-ac02-4aa7-aa6a-cac3dbccf486 00:26:04.890 08:51:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 51d4eba1-a89a-4caa-9358-3bf7bf7403c0 00:26:05.148 08:51:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:26:05.406 08:52:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:26:05.406 00:26:05.406 real 0m19.148s 00:26:05.406 user 0m48.315s 00:26:05.406 sys 0m4.668s 00:26:05.406 08:52:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:05.406 08:52:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:26:05.406 ************************************ 00:26:05.406 END TEST lvs_grow_dirty 00:26:05.406 ************************************ 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # type=--id 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # id=0 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # for n in $shm_files 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:05.664 nvmf_trace.0 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # return 0 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:05.664 rmmod nvme_tcp 00:26:05.664 rmmod nvme_fabrics 00:26:05.664 rmmod nvme_keyring 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2259694 ']' 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2259694 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # '[' -z 2259694 ']' 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # kill -0 2259694 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # uname 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2259694 00:26:05.664 08:52:00 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2259694' 00:26:05.664 killing process with pid 2259694 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # kill 2259694 00:26:05.664 08:52:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # wait 2259694 00:26:05.923 08:52:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:05.923 08:52:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:05.923 08:52:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:05.923 08:52:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:05.923 08:52:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:05.923 08:52:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.923 08:52:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:05.923 08:52:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.455 08:52:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:08.455 00:26:08.455 real 0m42.427s 00:26:08.455 user 1m11.053s 00:26:08.455 sys 0m8.958s 00:26:08.455 08:52:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:08.455 08:52:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:26:08.455 ************************************ 00:26:08.455 END TEST nvmf_lvs_grow 00:26:08.455 ************************************ 00:26:08.455 08:52:02 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:26:08.455 08:52:02 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:08.455 08:52:02 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:08.455 08:52:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:08.455 ************************************ 00:26:08.455 START TEST nvmf_bdev_io_wait 00:26:08.455 ************************************ 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:26:08.455 * Looking for test storage... 
00:26:08.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.455 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:08.456 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:08.456 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:26:08.456 08:52:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:10.987 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:10.988 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:10.988 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:10.988 Found net devices under 0000:09:00.0: cvl_0_0 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:10.988 Found net devices under 0000:09:00.1: cvl_0_1 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:10.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:10.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:26:10.988 00:26:10.988 --- 10.0.0.2 ping statistics --- 00:26:10.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.988 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:10.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:10.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:26:10.988 00:26:10.988 --- 10.0.0.1 ping statistics --- 00:26:10.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.988 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2262503 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2262503 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@828 -- # '[' -z 2262503 ']' 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:10.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:10.988 [2024-05-15 08:52:05.410195] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
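With the ping checks passed and nvme-tcp loaded, nvmfappstart launches the target inside the network namespace and waits for its RPC socket. Condensed to the commands the trace performs (paths shortened to the repo root; waitforlisten is the harness helper that polls /var/tmp/spdk.sock before any RPC is issued):

  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # block until the app answers on /var/tmp/spdk.sock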
00:26:10.988 [2024-05-15 08:52:05.410295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:10.988 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.988 [2024-05-15 08:52:05.491687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:10.988 [2024-05-15 08:52:05.579616] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:10.988 [2024-05-15 08:52:05.579676] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:10.988 [2024-05-15 08:52:05.579693] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:10.988 [2024-05-15 08:52:05.579708] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:10.988 [2024-05-15 08:52:05.579720] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:10.988 [2024-05-15 08:52:05.579803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.988 [2024-05-15 08:52:05.579869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:10.988 [2024-05-15 08:52:05.579959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:10.988 [2024-05-15 08:52:05.579962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@861 -- # return 0 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:10.988 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:10.989 [2024-05-15 08:52:05.724649] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.989 08:52:05 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:10.989 Malloc0 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.989 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:11.247 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.247 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:11.247 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.247 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:11.247 [2024-05-15 08:52:05.786618] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:11.247 [2024-05-15 08:52:05.786938] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:11.247 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.247 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2262559 00:26:11.247 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2262561 00:26:11.247 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:11.248 { 00:26:11.248 "params": { 00:26:11.248 "name": "Nvme$subsystem", 00:26:11.248 "trtype": "$TEST_TRANSPORT", 00:26:11.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.248 "adrfam": "ipv4", 00:26:11.248 "trsvcid": "$NVMF_PORT", 00:26:11.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.248 "hdgst": ${hdgst:-false}, 00:26:11.248 "ddgst": ${ddgst:-false} 00:26:11.248 }, 00:26:11.248 "method": 
"bdev_nvme_attach_controller" 00:26:11.248 } 00:26:11.248 EOF 00:26:11.248 )") 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2262564 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2262567 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:11.248 { 00:26:11.248 "params": { 00:26:11.248 "name": "Nvme$subsystem", 00:26:11.248 "trtype": "$TEST_TRANSPORT", 00:26:11.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.248 "adrfam": "ipv4", 00:26:11.248 "trsvcid": "$NVMF_PORT", 00:26:11.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.248 "hdgst": ${hdgst:-false}, 00:26:11.248 "ddgst": ${ddgst:-false} 00:26:11.248 }, 00:26:11.248 "method": "bdev_nvme_attach_controller" 00:26:11.248 } 00:26:11.248 EOF 00:26:11.248 )") 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:11.248 { 00:26:11.248 "params": { 00:26:11.248 "name": "Nvme$subsystem", 00:26:11.248 "trtype": "$TEST_TRANSPORT", 00:26:11.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.248 "adrfam": "ipv4", 00:26:11.248 "trsvcid": "$NVMF_PORT", 00:26:11.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.248 "hdgst": ${hdgst:-false}, 00:26:11.248 "ddgst": ${ddgst:-false} 00:26:11.248 }, 00:26:11.248 "method": "bdev_nvme_attach_controller" 00:26:11.248 } 00:26:11.248 EOF 00:26:11.248 )") 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@554 -- # cat 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:11.248 { 00:26:11.248 "params": { 00:26:11.248 "name": "Nvme$subsystem", 00:26:11.248 "trtype": "$TEST_TRANSPORT", 00:26:11.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.248 "adrfam": "ipv4", 00:26:11.248 "trsvcid": "$NVMF_PORT", 00:26:11.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.248 "hdgst": ${hdgst:-false}, 00:26:11.248 "ddgst": ${ddgst:-false} 00:26:11.248 }, 00:26:11.248 "method": "bdev_nvme_attach_controller" 00:26:11.248 } 00:26:11.248 EOF 00:26:11.248 )") 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2262559 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:11.248 "params": { 00:26:11.248 "name": "Nvme1", 00:26:11.248 "trtype": "tcp", 00:26:11.248 "traddr": "10.0.0.2", 00:26:11.248 "adrfam": "ipv4", 00:26:11.248 "trsvcid": "4420", 00:26:11.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:11.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:11.248 "hdgst": false, 00:26:11.248 "ddgst": false 00:26:11.248 }, 00:26:11.248 "method": "bdev_nvme_attach_controller" 00:26:11.248 }' 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:11.248 "params": { 00:26:11.248 "name": "Nvme1", 00:26:11.248 "trtype": "tcp", 00:26:11.248 "traddr": "10.0.0.2", 00:26:11.248 "adrfam": "ipv4", 00:26:11.248 "trsvcid": "4420", 00:26:11.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:11.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:11.248 "hdgst": false, 00:26:11.248 "ddgst": false 00:26:11.248 }, 00:26:11.248 "method": "bdev_nvme_attach_controller" 00:26:11.248 }' 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:11.248 "params": { 00:26:11.248 "name": "Nvme1", 00:26:11.248 "trtype": "tcp", 00:26:11.248 "traddr": "10.0.0.2", 00:26:11.248 "adrfam": "ipv4", 00:26:11.248 "trsvcid": "4420", 00:26:11.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:11.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:11.248 "hdgst": false, 00:26:11.248 "ddgst": false 00:26:11.248 }, 00:26:11.248 "method": "bdev_nvme_attach_controller" 00:26:11.248 }' 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:26:11.248 08:52:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:11.248 "params": { 00:26:11.248 "name": "Nvme1", 00:26:11.248 "trtype": "tcp", 00:26:11.248 "traddr": "10.0.0.2", 00:26:11.248 "adrfam": "ipv4", 00:26:11.248 "trsvcid": "4420", 00:26:11.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:11.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:11.248 "hdgst": false, 00:26:11.248 "ddgst": false 00:26:11.248 }, 00:26:11.248 "method": "bdev_nvme_attach_controller" 00:26:11.248 }' 00:26:11.248 [2024-05-15 08:52:05.832292] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:26:11.248 [2024-05-15 08:52:05.832292] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:26:11.248 [2024-05-15 08:52:05.832303] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:26:11.248 [2024-05-15 08:52:05.832303] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
[2024-05-15 08:52:05.832374] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:26:11.248 [2024-05-15 08:52:05.832374] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:26:11.249 [2024-05-15 08:52:05.832382] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:26:11.249 [2024-05-15 08:52:05.832383] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:26:11.249 EAL: No free 2048 kB hugepages reported on node 1
00:26:11.249 EAL: No free 2048 kB hugepages reported on node 1
00:26:11.507 [2024-05-15 08:52:05.993935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:11.507 EAL: No free 2048 kB hugepages reported on node 1
00:26:11.507 [2024-05-15 08:52:06.060242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:11.507 [2024-05-15 08:52:06.060812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:26:11.507 EAL: No free 2048 kB hugepages reported on node 1
00:26:11.507 [2024-05-15 08:52:06.127883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:26:11.507 [2024-05-15 08:52:06.158967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:11.507 [2024-05-15 08:52:06.234247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:26:11.507 [2024-05-15 08:52:06.259803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:11.766 [2024-05-15 08:52:06.332380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:26:11.766 Running I/O for 1 seconds...
00:26:11.766 Running I/O for 1 seconds...
00:26:11.766 Running I/O for 1 seconds...
00:26:11.766 Running I/O for 1 seconds...
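The four bdevperf instances above run concurrently against the same NVMe-oF namespace; each gets its own core mask and shared-memory ID, which is why four distinct DPDK file prefixes (spdk1 through spdk4) show up in the EAL parameters. Reduced to its flags, the launch pattern is roughly as follows (gen_nvmf_target_json emits the bdev_nvme_attach_controller JSON printed earlier, fed in via process substitution as /dev/fd/63):

  BP=build/examples/bdevperf
  $BP -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
  $BP -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
  $BP -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
  $BP -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!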
00:26:12.702 00:26:12.702 Latency(us) 00:26:12.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.702 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:26:12.702 Nvme1n1 : 1.00 184798.77 721.87 0.00 0.00 689.92 286.72 934.49 00:26:12.702 =================================================================================================================== 00:26:12.702 Total : 184798.77 721.87 0.00 0.00 689.92 286.72 934.49 00:26:12.702 00:26:12.702 Latency(us) 00:26:12.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.702 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:26:12.702 Nvme1n1 : 1.02 5351.78 20.91 0.00 0.00 23718.61 8155.59 33981.63 00:26:12.702 =================================================================================================================== 00:26:12.702 Total : 5351.78 20.91 0.00 0.00 23718.61 8155.59 33981.63 00:26:12.960 00:26:12.960 Latency(us) 00:26:12.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.960 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:26:12.960 Nvme1n1 : 1.01 10718.75 41.87 0.00 0.00 11893.65 6893.42 25049.32 00:26:12.960 =================================================================================================================== 00:26:12.960 Total : 10718.75 41.87 0.00 0.00 11893.65 6893.42 25049.32 00:26:12.960 00:26:12.960 Latency(us) 00:26:12.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.960 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:26:12.960 Nvme1n1 : 1.01 5112.10 19.97 0.00 0.00 24925.49 8107.05 50875.35 00:26:12.960 =================================================================================================================== 00:26:12.960 Total : 5112.10 19.97 0.00 0.00 24925.49 8107.05 50875.35 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2262561 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2262564 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2262567 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:13.218 rmmod nvme_tcp 00:26:13.218 rmmod nvme_fabrics 00:26:13.218 rmmod nvme_keyring 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait 
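A quick sanity check on the tables: the MiB/s column is just IOPS scaled by the 4096-byte I/O size, e.g. for the write job 10718.75 * 4096 / 2^20 = 41.87 MiB/s, which matches the reported value. The flush job's ~185k IOPS is plausibly this high because flush completes without touching media on the RAM-backed Malloc bdev. The same arithmetic in shell:

  echo '10718.75 * 4096 / 1048576' | bc -l   # 41.87 MiB/s, as reported for the write job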
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2262503 ']' 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2262503 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # '[' -z 2262503 ']' 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # kill -0 2262503 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # uname 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2262503 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2262503' 00:26:13.218 killing process with pid 2262503 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # kill 2262503 00:26:13.218 [2024-05-15 08:52:07.933700] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:13.218 08:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # wait 2262503 00:26:13.476 08:52:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:13.476 08:52:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:13.476 08:52:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:13.476 08:52:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:13.476 08:52:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:13.476 08:52:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.476 08:52:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:13.476 08:52:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.009 08:52:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:16.009 00:26:16.009 real 0m7.507s 00:26:16.009 user 0m16.163s 00:26:16.009 sys 0m3.693s 00:26:16.009 08:52:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:16.009 08:52:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:16.009 ************************************ 00:26:16.009 END TEST nvmf_bdev_io_wait 00:26:16.009 ************************************ 00:26:16.009 08:52:10 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:26:16.009 08:52:10 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:16.009 08:52:10 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:16.009 08:52:10 nvmf_tcp -- common/autotest_common.sh@10 -- # 
set +x 00:26:16.009 ************************************ 00:26:16.009 START TEST nvmf_queue_depth 00:26:16.009 ************************************ 00:26:16.009 08:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:26:16.009 * Looking for test storage... 00:26:16.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:16.009 08:52:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:16.009 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:26:16.009 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:16.009 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:16.009 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:16.009 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:16.009 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:16.009 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:16.009 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:16.009 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:16.009 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:16.009 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:16.009 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:16.009 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:26:16.010 08:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:18.566 
08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:18.566 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:18.566 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.566 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:18.567 Found net devices under 0000:09:00.0: cvl_0_0 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:18.567 Found net devices under 0000:09:00.1: cvl_0_1 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:18.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:18.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:26:18.567 00:26:18.567 --- 10.0.0.2 ping statistics --- 00:26:18.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.567 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:18.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:18.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:26:18.567 00:26:18.567 --- 10.0.0.1 ping statistics --- 00:26:18.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.567 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2265170 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2265170 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 2265170 ']' 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:18.567 08:52:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:18.567 [2024-05-15 08:52:12.981080] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
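For readers following the trace: the nvmf_tcp_init block above wires the two E810 ports discovered earlier (0000:09:00.0/cvl_0_0 and 0000:09:00.1/cvl_0_1) into a loopback test topology, with the target port isolated in its own network namespace. A minimal sketch of that sequence, using the interface names, addresses, and port exactly as logged here (all rig-specific, they will differ elsewhere):

  # target port moves into a private namespace; initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator side gets 10.0.0.1, target side 10.0.0.2, on the same /24
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the default NVMe/TCP port toward the initiator interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # verify reachability in both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2, as logged just below), so the target only sees cvl_0_0 while the initiator-side tools on the host connect over cvl_0_1.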
00:26:18.567 [2024-05-15 08:52:12.981158] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.567 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.567 [2024-05-15 08:52:13.056169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.567 [2024-05-15 08:52:13.139509] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.567 [2024-05-15 08:52:13.139571] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:18.567 [2024-05-15 08:52:13.139598] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:18.567 [2024-05-15 08:52:13.139609] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:18.567 [2024-05-15 08:52:13.139619] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:18.567 [2024-05-15 08:52:13.139646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.567 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:18.567 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:26:18.567 08:52:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:18.567 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:18.567 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:18.567 08:52:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.567 08:52:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:18.567 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.567 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:18.567 [2024-05-15 08:52:13.290196] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.567 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.567 08:52:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:18.567 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.567 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:18.567 Malloc0 00:26:18.567 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.567 08:52:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:18.567 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.568 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:18.568 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.568 08:52:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:18.568 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.568 08:52:13 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:18.568 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.568 08:52:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:18.568 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.568 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:18.826 [2024-05-15 08:52:13.351798] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:18.826 [2024-05-15 08:52:13.352106] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.826 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.826 08:52:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2265189 00:26:18.826 08:52:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:26:18.826 08:52:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:18.826 08:52:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2265189 /var/tmp/bdevperf.sock 00:26:18.826 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 2265189 ']' 00:26:18.826 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:18.826 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:18.826 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:18.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:18.826 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:18.826 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:18.826 [2024-05-15 08:52:13.395656] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
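Distilled from the rpc_cmd calls above and below (rpc_cmd is the test harness's wrapper around scripts/rpc.py), the queue-depth scenario configures the target and drives it roughly like this; a sketch assembled from this log, with the long Jenkins workspace paths elided:

  # target (inside the namespace): TCP transport, a ramdisk bdev, one subsystem with
  # a namespace and a listener on the target-side address
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MB bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator (root namespace): bdevperf starts idle (-z), waits on its own RPC socket,
  # then runs a 10 s verify workload with queue depth 1024 and 4 KiB I/O
  bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

As a sanity check on the result table below: 8586.32 IOPS at 4 KiB per I/O works out to 8586.32 * 4096 / 2^20 ≈ 33.54 MiB/s, matching the reported MiB/s column.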
00:26:18.826 [2024-05-15 08:52:13.395723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265189 ] 00:26:18.826 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.826 [2024-05-15 08:52:13.466798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.826 [2024-05-15 08:52:13.553963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.084 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:19.084 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:26:19.084 08:52:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:19.084 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:19.084 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:19.084 NVMe0n1 00:26:19.084 08:52:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:19.084 08:52:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:19.343 Running I/O for 10 seconds... 00:26:29.371 00:26:29.371 Latency(us) 00:26:29.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.371 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:26:29.371 Verification LBA range: start 0x0 length 0x4000 00:26:29.371 NVMe0n1 : 10.10 8586.32 33.54 0.00 0.00 118681.96 24660.95 75342.13 00:26:29.371 =================================================================================================================== 00:26:29.371 Total : 8586.32 33.54 0.00 0.00 118681.96 24660.95 75342.13 00:26:29.371 0 00:26:29.371 08:52:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2265189 00:26:29.371 08:52:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 2265189 ']' 00:26:29.371 08:52:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 2265189 00:26:29.371 08:52:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:26:29.371 08:52:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:29.371 08:52:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2265189 00:26:29.371 08:52:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:26:29.371 08:52:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:26:29.371 08:52:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2265189' 00:26:29.371 killing process with pid 2265189 00:26:29.371 08:52:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 2265189 00:26:29.371 Received shutdown signal, test time was about 10.000000 seconds 00:26:29.371 00:26:29.371 Latency(us) 00:26:29.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.371 =================================================================================================================== 00:26:29.371 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:29.371 08:52:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 2265189 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:29.629 rmmod nvme_tcp 00:26:29.629 rmmod nvme_fabrics 00:26:29.629 rmmod nvme_keyring 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2265170 ']' 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2265170 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 2265170 ']' 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 2265170 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2265170 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2265170' 00:26:29.629 killing process with pid 2265170 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 2265170 00:26:29.629 [2024-05-15 08:52:24.378479] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:29.629 08:52:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 2265170 00:26:29.887 08:52:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:29.887 08:52:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:29.887 08:52:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:29.887 08:52:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:29.887 08:52:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:29.887 08:52:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.887 08:52:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:29.887 08:52:24 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.421 08:52:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:32.421 00:26:32.421 real 0m16.427s 00:26:32.421 user 0m22.632s 00:26:32.421 sys 0m3.348s 00:26:32.421 08:52:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:32.421 08:52:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:32.421 ************************************ 00:26:32.421 END TEST nvmf_queue_depth 00:26:32.421 ************************************ 00:26:32.421 08:52:26 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:26:32.421 08:52:26 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:32.421 08:52:26 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:32.421 08:52:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:32.421 ************************************ 00:26:32.421 START TEST nvmf_target_multipath 00:26:32.421 ************************************ 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:26:32.421 * Looking for test storage... 00:26:32.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:26:32.421 08:52:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:26:34.946 08:52:29 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:34.946 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:34.947 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:34.947 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:34.947 08:52:29 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:34.947 Found net devices under 0000:09:00.0: cvl_0_0 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:34.947 Found net devices under 0000:09:00.1: cvl_0_1 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:34.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:34.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:26:34.947 00:26:34.947 --- 10.0.0.2 ping statistics --- 00:26:34.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.947 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:34.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:34.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:26:34.947 00:26:34.947 --- 10.0.0.1 ping statistics --- 00:26:34.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.947 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:26:34.947 only one NIC for nvmf test 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:34.947 rmmod nvme_tcp 00:26:34.947 rmmod nvme_fabrics 00:26:34.947 rmmod nvme_keyring 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.947 08:52:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:34.948 08:52:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:36.850 00:26:36.850 real 0m4.692s 00:26:36.850 user 0m0.923s 00:26:36.850 sys 0m1.786s 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:36.850 08:52:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:36.850 ************************************ 00:26:36.850 END TEST nvmf_target_multipath 00:26:36.850 ************************************ 00:26:36.850 08:52:31 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:26:36.850 08:52:31 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:36.850 08:52:31 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:36.850 08:52:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:36.850 ************************************ 00:26:36.850 START TEST nvmf_zcopy 00:26:36.850 ************************************ 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:26:36.850 * Looking for test storage... 
00:26:36.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
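The identity block that nvmf/common.sh just established can be restated as a few lines of shell. A sketch: the uuid in NVME_HOSTNQN is machine-specific (nvme gen-hostnqn derives it locally), and the parameter expansion shown is one plausible way to split off the host ID, not necessarily the one common.sh itself uses:

    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:29f67375-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the uuid after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
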
00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:26:36.850 08:52:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:39.378 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:39.378 
08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:39.378 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:39.378 Found net devices under 0000:09:00.0: cvl_0_0 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:39.378 Found net devices under 0000:09:00.1: cvl_0_1 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:39.378 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:39.379 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:39.379 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.379 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:39.379 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:39.379 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:39.379 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:39.379 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:39.379 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:39.379 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:39.379 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:39.379 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:39.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:39.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:26:39.636 00:26:39.636 --- 10.0.0.2 ping statistics --- 00:26:39.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.636 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:39.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:39.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:26:39.636 00:26:39.636 --- 10.0.0.1 ping statistics --- 00:26:39.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.636 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2270945 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2270945 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@828 -- # '[' -z 2270945 ']' 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:39.636 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:39.636 [2024-05-15 08:52:34.257708] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:26:39.636 [2024-05-15 08:52:34.257798] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.636 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.636 [2024-05-15 08:52:34.332951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.636 [2024-05-15 08:52:34.416191] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.636 [2024-05-15 08:52:34.416272] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:39.636 [2024-05-15 08:52:34.416297] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.637 [2024-05-15 08:52:34.416308] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.637 [2024-05-15 08:52:34.416318] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:39.637 [2024-05-15 08:52:34.416349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@861 -- # return 0 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:39.895 [2024-05-15 08:52:34.557080] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:39.895 [2024-05-15 08:52:34.573056] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:39.895 [2024-05-15 08:52:34.573356] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:39.895 malloc0 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:39.895 { 00:26:39.895 "params": { 00:26:39.895 "name": "Nvme$subsystem", 00:26:39.895 "trtype": "$TEST_TRANSPORT", 00:26:39.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.895 "adrfam": "ipv4", 00:26:39.895 "trsvcid": "$NVMF_PORT", 00:26:39.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.895 "hdgst": ${hdgst:-false}, 00:26:39.895 "ddgst": ${ddgst:-false} 00:26:39.895 }, 00:26:39.895 "method": "bdev_nvme_attach_controller" 00:26:39.895 } 00:26:39.895 EOF 00:26:39.895 )") 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:26:39.895 08:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:39.895 "params": { 00:26:39.895 "name": "Nvme1", 00:26:39.895 "trtype": "tcp", 00:26:39.895 "traddr": "10.0.0.2", 00:26:39.895 "adrfam": "ipv4", 00:26:39.895 "trsvcid": "4420", 00:26:39.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:39.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:39.895 "hdgst": false, 00:26:39.895 "ddgst": false 00:26:39.895 }, 00:26:39.895 "method": "bdev_nvme_attach_controller" 00:26:39.895 }' 00:26:39.895 [2024-05-15 08:52:34.651846] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:26:39.895 [2024-05-15 08:52:34.651913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2271080 ] 00:26:39.895 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.153 [2024-05-15 08:52:34.723916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.153 [2024-05-15 08:52:34.814484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.411 Running I/O for 10 seconds... 
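Stripped of the xtrace prefixes, the zcopy target bring-up traced above is a short RPC sequence. The sketch below assumes rpc_cmd resolves to scripts/rpc.py talking to the nvmf_tgt running inside cvl_0_0_ns_spdk; the commands and arguments themselves are taken verbatim from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport with zero-copy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0               # 32 MiB ram-backed bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then connects from the initiator side of the namespace split (10.0.0.1 to 10.0.0.2:4420) using the bdev_nvme_attach_controller JSON printed in the trace.
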
00:26:50.396 00:26:50.396 Latency(us) 00:26:50.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:50.396 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:26:50.396 Verification LBA range: start 0x0 length 0x1000 00:26:50.396 Nvme1n1 : 10.02 5715.61 44.65 0.00 0.00 22332.56 3252.53 31651.46 00:26:50.396 =================================================================================================================== 00:26:50.396 Total : 5715.61 44.65 0.00 0.00 22332.56 3252.53 31651.46 00:26:50.654 08:52:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2272276 00:26:50.654 08:52:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:26:50.654 08:52:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:50.654 08:52:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:26:50.654 08:52:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:26:50.654 08:52:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:26:50.654 08:52:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:26:50.654 08:52:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:50.654 08:52:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:50.654 { 00:26:50.654 "params": { 00:26:50.654 "name": "Nvme$subsystem", 00:26:50.654 "trtype": "$TEST_TRANSPORT", 00:26:50.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:50.654 "adrfam": "ipv4", 00:26:50.654 "trsvcid": "$NVMF_PORT", 00:26:50.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:50.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:50.654 "hdgst": ${hdgst:-false}, 00:26:50.654 "ddgst": ${ddgst:-false} 00:26:50.654 }, 00:26:50.654 "method": "bdev_nvme_attach_controller" 00:26:50.654 } 00:26:50.654 EOF 00:26:50.654 )") 00:26:50.655 08:52:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:26:50.655 [2024-05-15 08:52:45.273298] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.273351] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.655 08:52:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:26:50.655 08:52:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:26:50.655 08:52:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:50.655 "params": { 00:26:50.655 "name": "Nvme1", 00:26:50.655 "trtype": "tcp", 00:26:50.655 "traddr": "10.0.0.2", 00:26:50.655 "adrfam": "ipv4", 00:26:50.655 "trsvcid": "4420", 00:26:50.655 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:50.655 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:50.655 "hdgst": false, 00:26:50.655 "ddgst": false 00:26:50.655 }, 00:26:50.655 "method": "bdev_nvme_attach_controller" 00:26:50.655 }' 00:26:50.655 [2024-05-15 08:52:45.281238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.281280] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.655 [2024-05-15 08:52:45.289243] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.289266] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.655 [2024-05-15 08:52:45.297304] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.297329] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.655 [2024-05-15 08:52:45.305310] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.305334] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.655 [2024-05-15 08:52:45.307759] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:26:50.655 [2024-05-15 08:52:45.307828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2272276 ] 00:26:50.655 [2024-05-15 08:52:45.313313] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.313338] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.655 [2024-05-15 08:52:45.321333] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.321356] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.655 [2024-05-15 08:52:45.329365] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.329387] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.655 [2024-05-15 08:52:45.337385] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.337407] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.655 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.655 [2024-05-15 08:52:45.345408] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.345431] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.655 [2024-05-15 08:52:45.353428] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.353450] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.655 [2024-05-15 08:52:45.361452] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.361473] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.655 [2024-05-15 08:52:45.369473] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.369495] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.655 [2024-05-15 08:52:45.377510] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.377534] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.655 [2024-05-15 08:52:45.380355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.655 [2024-05-15 08:52:45.385548] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.385577] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.655 [2024-05-15 08:52:45.393606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.393650] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.655 [2024-05-15 08:52:45.401590] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.401616] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.655 [2024-05-15 08:52:45.409607] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.409632] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.655 [2024-05-15 08:52:45.417630] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.417654] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.655 [2024-05-15 08:52:45.425655] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.425680] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.655 [2024-05-15 08:52:45.433706] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.433744] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.655 [2024-05-15 08:52:45.441735] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.655 [2024-05-15 08:52:45.441774] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.913 [2024-05-15 08:52:45.449721] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.913 [2024-05-15 08:52:45.449761] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.913 [2024-05-15 08:52:45.457741] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.913 [2024-05-15 08:52:45.457766] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.913 [2024-05-15 08:52:45.465764] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.913 [2024-05-15 08:52:45.465789] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.913 [2024-05-15 08:52:45.473785] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:26:50.913 [2024-05-15 08:52:45.473810] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.913 [2024-05-15 08:52:45.473811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.913 [2024-05-15 08:52:45.481805] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.913 [2024-05-15 08:52:45.481829] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.913 [2024-05-15 08:52:45.489847] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.913 [2024-05-15 08:52:45.489881] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.913 [2024-05-15 08:52:45.497887] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.913 [2024-05-15 08:52:45.497927] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.913 [2024-05-15 08:52:45.505910] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.913 [2024-05-15 08:52:45.505952] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.913 [2024-05-15 08:52:45.513930] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.913 [2024-05-15 08:52:45.513973] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.913 [2024-05-15 08:52:45.521953] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.913 [2024-05-15 08:52:45.521995] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.913 [2024-05-15 08:52:45.529973] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.913 [2024-05-15 08:52:45.530015] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.913 [2024-05-15 08:52:45.537994] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.913 [2024-05-15 08:52:45.538047] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.913 [2024-05-15 08:52:45.545979] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.913 [2024-05-15 08:52:45.546005] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.913 [2024-05-15 08:52:45.554030] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.913 [2024-05-15 08:52:45.554071] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.913 [2024-05-15 08:52:45.562052] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.914 [2024-05-15 08:52:45.562091] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.914 [2024-05-15 08:52:45.570075] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.914 [2024-05-15 08:52:45.570116] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.914 [2024-05-15 08:52:45.578061] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.914 [2024-05-15 08:52:45.578086] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.914 [2024-05-15 08:52:45.586082] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:26:50.914 [2024-05-15 08:52:45.586106] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.914 [2024-05-15 08:52:45.594122] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.914 [2024-05-15 08:52:45.594152] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.914 [2024-05-15 08:52:45.602137] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.914 [2024-05-15 08:52:45.602164] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.914 [2024-05-15 08:52:45.610158] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.914 [2024-05-15 08:52:45.610185] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.914 [2024-05-15 08:52:45.618179] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.914 [2024-05-15 08:52:45.618206] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.914 [2024-05-15 08:52:45.626203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.914 [2024-05-15 08:52:45.626235] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.914 [2024-05-15 08:52:45.634230] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.914 [2024-05-15 08:52:45.634267] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.914 [2024-05-15 08:52:45.642264] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.914 [2024-05-15 08:52:45.642285] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.914 [2024-05-15 08:52:45.650284] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.914 [2024-05-15 08:52:45.650305] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.914 [2024-05-15 08:52:45.658308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.914 [2024-05-15 08:52:45.658332] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.914 [2024-05-15 08:52:45.666324] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.914 [2024-05-15 08:52:45.666347] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.914 [2024-05-15 08:52:45.674340] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.914 [2024-05-15 08:52:45.674363] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.914 [2024-05-15 08:52:45.682359] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.914 [2024-05-15 08:52:45.682380] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.914 [2024-05-15 08:52:45.690387] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.914 [2024-05-15 08:52:45.690413] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.914 [2024-05-15 08:52:45.698410] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:50.914 [2024-05-15 08:52:45.698433] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:50.914 Running I/O for 5 seconds... 
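The second bdevperf run pairs 5 seconds of random read/write I/O with a stream of deliberately failing RPCs: nvmf_subsystem_add_ns is re-issued for NSID 1 while the namespace is still attached, so every attempt logs the "Requested NSID 1 already in use" / "Unable to add namespace" pair that fills this stretch of the log. A sketch of that shape, with the caveat that the exact loop in target/zcopy.sh is not visible in the trace and gen_nvmf_target_json is the harness helper that emits the JSON shown above:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    $bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!
    while kill -0 "$perfpid" 2> /dev/null; do    # exercise the paused-subsystem add_ns path during I/O
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done
    wait "$perfpid"
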
00:26:51.172 [2024-05-15 08:52:45.706431] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.172 [2024-05-15 08:52:45.706453] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.172 [2024-05-15 08:52:45.720575] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.172 [2024-05-15 08:52:45.720607] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.172 [2024-05-15 08:52:45.732293] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.172 [2024-05-15 08:52:45.732322] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.172 [2024-05-15 08:52:45.743647] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.172 [2024-05-15 08:52:45.743678] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.172 [2024-05-15 08:52:45.755454] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.172 [2024-05-15 08:52:45.755482] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.172 [2024-05-15 08:52:45.767062] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.172 [2024-05-15 08:52:45.767092] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.172 [2024-05-15 08:52:45.780445] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.172 [2024-05-15 08:52:45.780473] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.173 [2024-05-15 08:52:45.790889] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.173 [2024-05-15 08:52:45.790920] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.173 [2024-05-15 08:52:45.802384] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.173 [2024-05-15 08:52:45.802411] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.173 [2024-05-15 08:52:45.815656] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.173 [2024-05-15 08:52:45.815686] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.173 [2024-05-15 08:52:45.826369] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.173 [2024-05-15 08:52:45.826396] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.173 [2024-05-15 08:52:45.837743] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.173 [2024-05-15 08:52:45.837773] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.173 [2024-05-15 08:52:45.851091] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.173 [2024-05-15 08:52:45.851121] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.173 [2024-05-15 08:52:45.861877] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.173 [2024-05-15 08:52:45.861907] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.173 [2024-05-15 08:52:45.872814] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.173 
[2024-05-15 08:52:45.872844] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.173 [2024-05-15 08:52:45.884145] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.173 [2024-05-15 08:52:45.884175] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.173 [2024-05-15 08:52:45.895708] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.173 [2024-05-15 08:52:45.895738] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.173 [2024-05-15 08:52:45.907041] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.173 [2024-05-15 08:52:45.907071] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.173 [2024-05-15 08:52:45.920050] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.173 [2024-05-15 08:52:45.920079] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.173 [2024-05-15 08:52:45.930823] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.173 [2024-05-15 08:52:45.930853] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.173 [2024-05-15 08:52:45.941868] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.173 [2024-05-15 08:52:45.941898] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.173 [2024-05-15 08:52:45.954225] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.173 [2024-05-15 08:52:45.954255] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.173 [2024-05-15 08:52:45.963659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.173 [2024-05-15 08:52:45.963690] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.432 [2024-05-15 08:52:45.975294] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.432 [2024-05-15 08:52:45.975320] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.432 [2024-05-15 08:52:45.986224] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.432 [2024-05-15 08:52:45.986268] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.432 [2024-05-15 08:52:45.997804] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.432 [2024-05-15 08:52:45.997834] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.432 [2024-05-15 08:52:46.009341] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.432 [2024-05-15 08:52:46.009367] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.432 [2024-05-15 08:52:46.022677] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.432 [2024-05-15 08:52:46.022707] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.432 [2024-05-15 08:52:46.033170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.432 [2024-05-15 08:52:46.033200] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:51.432 [2024-05-15 08:52:46.044225] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:51.432 [2024-05-15 08:52:46.044269] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
[... the same two-message pair repeats with advancing timestamps, roughly every 10-13 ms, from 08:52:46.044 (elapsed 00:26:51.432) through 08:52:49.490 (elapsed 00:26:54.798); the intervening identical pairs are elided ...]
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.345406] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.356801] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.356830] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.367842] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.367872] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.379273] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.379301] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.391606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.391633] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.400353] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.400383] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.413185] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.413211] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.423415] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.423442] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.434355] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.434382] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.446628] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.446655] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.456408] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.456434] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.466819] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.466845] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.477292] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.477319] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.489525] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.489552] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.499066] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.499093] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.509243] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.509269] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.519581] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.519608] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.532003] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.532030] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.541801] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.541840] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.552706] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.552733] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.564872] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.564899] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.574779] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.574806] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:54.798 [2024-05-15 08:52:49.585124] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:54.798 [2024-05-15 08:52:49.585151] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.595325] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.595352] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.605510] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.605537] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.615644] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.615670] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.626369] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.626395] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.638599] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.638626] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.648609] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.648636] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.659190] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.659224] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.669632] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.669659] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.679885] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.679912] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.689913] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.689940] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.700343] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.700370] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.710900] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.710926] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.721329] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.721356] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.731744] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.731770] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.742118] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.742156] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.752624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.752650] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.765275] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.765302] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.775318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.775344] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.786022] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.786049] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.796936] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.796963] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.807293] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.807336] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.817976] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.818003] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.828684] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.828711] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.057 [2024-05-15 08:52:49.839638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.057 [2024-05-15 08:52:49.839669] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:49.850672] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:49.850699] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:49.861332] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:49.861359] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:49.873355] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:49.873383] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:49.882269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:49.882295] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:49.893531] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:49.893557] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:49.905919] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:49.905946] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:49.916502] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:49.916530] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:49.927532] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:49.927559] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:49.940074] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:49.940101] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:49.949430] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:49.949468] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:49.960564] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:49.960591] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:49.971264] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:49.971291] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:49.981561] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:49.981588] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:49.992507] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:49.992534] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:50.002939] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:50.002968] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:50.013119] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:50.013147] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:50.023161] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:50.023188] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:50.034098] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:50.034129] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:50.045038] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:50.045069] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:50.056014] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:50.056045] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:50.068717] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:50.068748] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:50.078959] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:50.078988] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:50.090843] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:50.090874] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.316 [2024-05-15 08:52:50.102274] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.316 [2024-05-15 08:52:50.102301] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.575 [2024-05-15 08:52:50.115695] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.575 [2024-05-15 08:52:50.115725] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.575 [2024-05-15 08:52:50.126289] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.575 [2024-05-15 08:52:50.126315] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.575 [2024-05-15 08:52:50.137974] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.575 [2024-05-15 08:52:50.138003] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.575 [2024-05-15 08:52:50.149392] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.575 [2024-05-15 08:52:50.149419] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.575 [2024-05-15 08:52:50.160666] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.575 [2024-05-15 08:52:50.160705] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.575 [2024-05-15 08:52:50.172558] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.575 [2024-05-15 08:52:50.172588] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.575 [2024-05-15 08:52:50.183669] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.575 [2024-05-15 08:52:50.183698] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.575 [2024-05-15 08:52:50.194885] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.575 [2024-05-15 08:52:50.194915] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.575 [2024-05-15 08:52:50.210202] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.575 [2024-05-15 08:52:50.210243] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.575 [2024-05-15 08:52:50.220705] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.575 [2024-05-15 08:52:50.220735] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.575 [2024-05-15 08:52:50.232669] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.575 [2024-05-15 08:52:50.232701] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.575 [2024-05-15 08:52:50.244126] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.575 [2024-05-15 08:52:50.244156] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.575 [2024-05-15 08:52:50.257334] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.575 [2024-05-15 08:52:50.257361] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.575 [2024-05-15 08:52:50.268069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.575 [2024-05-15 08:52:50.268099] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.575 [2024-05-15 08:52:50.279047] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.575 [2024-05-15 08:52:50.279078] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.575 [2024-05-15 08:52:50.290517] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.575 [2024-05-15 08:52:50.290544] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.575 [2024-05-15 08:52:50.301816] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.575 [2024-05-15 08:52:50.301846] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.575 [2024-05-15 08:52:50.312979] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.576 [2024-05-15 08:52:50.313009] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.576 [2024-05-15 08:52:50.324762] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.576 [2024-05-15 08:52:50.324791] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.576 [2024-05-15 08:52:50.336481] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.576 [2024-05-15 08:52:50.336527] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.576 [2024-05-15 08:52:50.347863] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.576 [2024-05-15 08:52:50.347893] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.576 [2024-05-15 08:52:50.361196] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.576 [2024-05-15 08:52:50.361459] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.834 [2024-05-15 08:52:50.371724] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.834 [2024-05-15 08:52:50.371755] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.834 [2024-05-15 08:52:50.382931] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.834 [2024-05-15 08:52:50.382969] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.834 [2024-05-15 08:52:50.394167] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.834 [2024-05-15 08:52:50.394197] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.834 [2024-05-15 08:52:50.405428] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.834 [2024-05-15 08:52:50.405455] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.834 [2024-05-15 08:52:50.419308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.834 [2024-05-15 08:52:50.419335] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.834 [2024-05-15 08:52:50.430185] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.834 [2024-05-15 08:52:50.430223] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.834 [2024-05-15 08:52:50.441472] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.834 [2024-05-15 08:52:50.441516] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.834 [2024-05-15 08:52:50.453083] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.834 [2024-05-15 08:52:50.453113] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.834 [2024-05-15 08:52:50.464633] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.834 [2024-05-15 08:52:50.464662] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.834 [2024-05-15 08:52:50.475631] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.834 [2024-05-15 08:52:50.475661] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.834 [2024-05-15 08:52:50.487359] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.834 [2024-05-15 08:52:50.487386] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.834 [2024-05-15 08:52:50.498725] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.834 [2024-05-15 08:52:50.498755] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.834 [2024-05-15 08:52:50.511510] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.834 [2024-05-15 08:52:50.511540] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.834 [2024-05-15 08:52:50.522181] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.834 [2024-05-15 08:52:50.522211] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.834 [2024-05-15 08:52:50.533919] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.834 [2024-05-15 08:52:50.533948] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.834 [2024-05-15 08:52:50.547025] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.834 [2024-05-15 08:52:50.547055] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.834 [2024-05-15 08:52:50.557675] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.834 [2024-05-15 08:52:50.557705] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.834 [2024-05-15 08:52:50.568908] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.834 [2024-05-15 08:52:50.568943] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.834 [2024-05-15 08:52:50.580145] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.834 [2024-05-15 08:52:50.580174] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.834 [2024-05-15 08:52:50.591854] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.834 [2024-05-15 08:52:50.591883] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.834 [2024-05-15 08:52:50.603469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.834 [2024-05-15 08:52:50.603511] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.835 [2024-05-15 08:52:50.616481] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.835 [2024-05-15 08:52:50.616524] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:55.835 [2024-05-15 08:52:50.626324] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:55.835 [2024-05-15 08:52:50.626351] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:56.092 [2024-05-15 08:52:50.638375] 
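Each pair records the same failure at two layers: subsystem.c rejects the namespace because NSID 1 is already taken, and the RPC layer (nvmf_rpc.c) then reports the failed call. A minimal sketch of provoking the same collision by hand against a running target, using the stock scripts/rpc.py and the malloc-bdev parameters this suite uses elsewhere (the subsystem and bdev names here are illustrative):

    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py bdev_malloc_create -b malloc0 64 512                            # 64 MiB bdev, 512 B blocks
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # NSID 1 is now in use
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # rejected with the error pair above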
[... the error pair continues unchanged through 08:52:50.723209 ...]
00:26:56.092  Latency(us)
00:26:56.092  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:56.092  Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:26:56.092  Nvme1n1                      :       5.01   11461.09      89.54       0.00       0.00   11152.38    4538.97   26991.12
00:26:56.093  ===================================================================================================================
00:26:56.093  Total                        :            11461.09      89.54       0.00       0.00   11152.38    4538.97   26991.12
[... after the I/O summary the pair resumes at ~8 ms intervals, from 08:52:50.731203 through 08:52:50.867705 ...]
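A quick consistency check on the Latency(us) block above: at the job's 8192-byte I/O size, the IOPS column reproduces the MiB/s column.

    # 11461.09 IOPS x 8192 B per I/O, converted to MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 11461.09 * 8192 / 1048576 }'   # prints 89.54 MiB/s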
[... a final stretch of the same pair, from 08:52:50.875632 through 08:52:50.955867, while the loop is shut down ...]
00:26:56.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2272276) - No such process
00:26:56.351 08:52:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2272276
00:26:56.351 08:52:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:56.351 08:52:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable
00:26:56.351 08:52:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:26:56.351 08:52:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:26:56.351 08:52:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:26:56.351 08:52:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable
00:26:56.351 08:52:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:26:56.351 delay0
00:26:56.351 08:52:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:26:56.351 08:52:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:26:56.351 08:52:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable
00:26:56.351 08:52:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:26:56.351 08:52:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:26:56.351 08:52:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:26:56.351 EAL: No free 2048 kB hugepages reported on node 1
00:26:56.351 [2024-05-15 08:52:51.073420] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:27:02.906 Initializing NVMe Controllers
00:27:02.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:02.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:02.906 Initialization complete. Launching workers.
00:27:02.906 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 94
00:27:02.906 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 381, failed to submit 33
00:27:02.906 success 210, unsuccess 171, failed 0
00:27:02.906 08:52:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:27:02.906 08:52:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:27:02.906 08:52:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:02.906 08:52:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:27:02.906 08:52:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:02.906 08:52:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
00:27:02.906 08:52:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:02.906 08:52:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:02.906 rmmod nvme_tcp
00:27:02.906 rmmod nvme_fabrics
00:27:02.906 rmmod nvme_keyring
00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2270945 ']'
00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2270945
00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@947 -- # '[' -z 2270945 ']'
00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # kill -0 2270945
00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # uname
00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2270945
00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
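Before that abort run, the trace swaps the subsystem's namespace onto a bdev_delay wrapper, presumably so queued I/O lingers long enough for abort commands to catch it in flight. The same sequence, condensed into a runnable sketch under this run's assumptions (same tree layout, a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420; the delay arguments are microseconds):

    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # wrap malloc0 in a delay bdev: ~1 s average and p99 latency for reads and writes
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # drive queue-depth-64 random I/O at the slow namespace and abort it mid-flight
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'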
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2270945' 00:27:02.907 killing process with pid 2270945 00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # kill 2270945 00:27:02.907 [2024-05-15 08:52:57.409042] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@971 -- # wait 2270945 00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:02.907 08:52:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.441 08:52:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:05.441 00:27:05.441 real 0m28.229s 00:27:05.441 user 0m41.108s 00:27:05.441 sys 0m8.623s 00:27:05.441 08:52:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:05.441 08:52:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:05.441 ************************************ 00:27:05.441 END TEST nvmf_zcopy 00:27:05.441 ************************************ 00:27:05.441 08:52:59 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:27:05.441 08:52:59 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:05.441 08:52:59 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:05.441 08:52:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:05.441 ************************************ 00:27:05.441 START TEST nvmf_nmic 00:27:05.441 ************************************ 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:27:05.441 * Looking for test storage... 
00:27:05.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.441 08:52:59 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:27:05.441 08:52:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:07.969 
08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:07.969 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.969 08:53:02 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:07.969 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:07.969 Found net devices under 0000:09:00.0: cvl_0_0 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:07.969 Found net devices under 0000:09:00.1: cvl_0_1 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
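Condensed, the nvmf_tcp_init steps traced below wire the two discovered E810 ports into a back-to-back NVMe/TCP topology. A sketch of just those ip/iptables commands, copied from the trace (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the values this particular run resolved, not fixed constants):

ip netns add cvl_0_0_ns_spdk                        # target side gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator port
ping -c 1 10.0.0.2                                  # sanity check: root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and the reverse direction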
00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:07.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:07.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:27:07.969 00:27:07.969 --- 10.0.0.2 ping statistics --- 00:27:07.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.969 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:07.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:07.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:27:07.969 00:27:07.969 --- 10.0.0.1 ping statistics --- 00:27:07.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.969 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:27:07.969 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:07.970 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:07.970 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:07.970 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2275941 00:27:07.970 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2275941 00:27:07.970 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@828 -- # '[' -z 2275941 ']' 00:27:07.970 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.970 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:07.970 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:07.970 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.970 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:07.970 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:07.970 [2024-05-15 08:53:02.530563] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:27:07.970 [2024-05-15 08:53:02.530642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.970 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.970 [2024-05-15 08:53:02.604271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:07.970 [2024-05-15 08:53:02.687052] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.970 [2024-05-15 08:53:02.687106] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:07.970 [2024-05-15 08:53:02.687133] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.970 [2024-05-15 08:53:02.687145] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.970 [2024-05-15 08:53:02.687154] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:07.970 [2024-05-15 08:53:02.687239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.970 [2024-05-15 08:53:02.687301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:07.970 [2024-05-15 08:53:02.687583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.970 [2024-05-15 08:53:02.687586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.227 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:08.227 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@861 -- # return 0 00:27:08.227 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:08.227 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:08.227 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:08.227 08:53:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:08.227 08:53:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:08.227 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.227 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:08.227 [2024-05-15 08:53:02.820843] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.227 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.227 08:53:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:08.227 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.227 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:08.228 Malloc0 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:08.228 [2024-05-15 08:53:02.871835] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:08.228 [2024-05-15 08:53:02.872142] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:27:08.228 test case1: single bdev can't be used in multiple subsystems 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:08.228 [2024-05-15 08:53:02.895949] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:27:08.228 [2024-05-15 08:53:02.895977] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:27:08.228 [2024-05-15 08:53:02.896007] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:08.228 request: 00:27:08.228 { 00:27:08.228 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:27:08.228 "namespace": { 00:27:08.228 "bdev_name": "Malloc0", 00:27:08.228 "no_auto_visible": false 00:27:08.228 }, 00:27:08.228 "method": "nvmf_subsystem_add_ns", 00:27:08.228 "req_id": 1 00:27:08.228 } 00:27:08.228 Got JSON-RPC error response 00:27:08.228 response: 00:27:08.228 { 00:27:08.228 "code": -32602, 00:27:08.228 "message": "Invalid parameters" 00:27:08.228 } 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:27:08.228 Adding namespace failed - expected result. 
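Test case 1 above can be reproduced outside the harness with plain rpc.py calls; a minimal sketch, assuming the target from this run is still listening on /var/tmp/spdk.sock (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim: exclusive_write
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: Malloc0 is already
# claimed by cnode1, so bdev_open fails with error=-1 and the RPC returns -32602, as logged above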
00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:27:08.228 test case2: host connect to nvmf target in multiple paths 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:08.228 [2024-05-15 08:53:02.904059] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.228 08:53:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:08.792 08:53:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:27:09.414 08:53:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:27:09.414 08:53:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local i=0 00:27:09.415 08:53:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:27:09.415 08:53:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:27:09.415 08:53:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # sleep 2 00:27:11.939 08:53:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:27:11.939 08:53:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:27:11.939 08:53:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:27:11.939 08:53:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:27:11.939 08:53:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:27:11.939 08:53:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # return 0 00:27:11.939 08:53:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:27:11.939 [global] 00:27:11.939 thread=1 00:27:11.939 invalidate=1 00:27:11.939 rw=write 00:27:11.939 time_based=1 00:27:11.939 runtime=1 00:27:11.939 ioengine=libaio 00:27:11.939 direct=1 00:27:11.939 bs=4096 00:27:11.939 iodepth=1 00:27:11.939 norandommap=0 00:27:11.939 numjobs=1 00:27:11.939 00:27:11.939 verify_dump=1 00:27:11.939 verify_backlog=512 00:27:11.939 verify_state_save=0 00:27:11.939 do_verify=1 00:27:11.939 verify=crc32c-intel 00:27:11.939 [job0] 00:27:11.939 filename=/dev/nvme0n1 00:27:11.939 Could not set queue depth (nvme0n1) 00:27:11.939 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:11.939 fio-3.35 00:27:11.939 Starting 1 thread 00:27:12.873 00:27:12.873 job0: (groupid=0, jobs=1): err= 0: pid=2276476: Wed May 15 08:53:07 2024 00:27:12.873 read: IOPS=20, BW=82.0KiB/s (84.0kB/s)(84.0KiB/1024msec) 00:27:12.873 slat (nsec): min=12052, max=36039, avg=25376.10, stdev=10630.68 
00:27:12.873 clat (usec): min=40845, max=44958, avg=41754.80, stdev=875.68 00:27:12.873 lat (usec): min=40880, max=44974, avg=41780.18, stdev=873.85 00:27:12.873 clat percentiles (usec): 00:27:12.873 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:27:12.873 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:27:12.873 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:12.873 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:27:12.873 | 99.99th=[44827] 00:27:12.873 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:27:12.873 slat (usec): min=7, max=30976, avg=78.40, stdev=1368.21 00:27:12.873 clat (usec): min=158, max=376, avg=202.22, stdev=19.44 00:27:12.873 lat (usec): min=166, max=31256, avg=280.62, stdev=1371.81 00:27:12.873 clat percentiles (usec): 00:27:12.873 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 180], 20.00th=[ 190], 00:27:12.873 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:27:12.873 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 233], 00:27:12.873 | 99.00th=[ 253], 99.50th=[ 281], 99.90th=[ 375], 99.95th=[ 375], 00:27:12.873 | 99.99th=[ 375] 00:27:12.873 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:27:12.873 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:27:12.873 lat (usec) : 250=94.37%, 500=1.69% 00:27:12.873 lat (msec) : 50=3.94% 00:27:12.873 cpu : usr=0.78%, sys=0.98%, ctx=536, majf=0, minf=2 00:27:12.873 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:12.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.873 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.873 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:12.873 00:27:12.873 Run status group 0 (all jobs): 00:27:12.873 READ: bw=82.0KiB/s (84.0kB/s), 82.0KiB/s-82.0KiB/s (84.0kB/s-84.0kB/s), io=84.0KiB (86.0kB), run=1024-1024msec 00:27:12.873 WRITE: bw=2000KiB/s (2048kB/s), 2000KiB/s-2000KiB/s (2048kB/s-2048kB/s), io=2048KiB (2097kB), run=1024-1024msec 00:27:12.873 00:27:12.873 Disk stats (read/write): 00:27:12.873 nvme0n1: ios=45/512, merge=0/0, ticks=1742/108, in_queue=1850, util=98.70% 00:27:12.873 08:53:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:12.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:27:12.873 08:53:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:12.873 08:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # local i=0 00:27:12.873 08:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:27:12.873 08:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:12.873 08:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:27:12.873 08:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:12.873 08:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1228 -- # return 0 00:27:12.873 08:53:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:12.873 08:53:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:27:12.873 08:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:27:12.873 08:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:27:12.873 08:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:12.873 08:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:27:12.873 08:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:12.873 08:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:12.874 rmmod nvme_tcp 00:27:12.874 rmmod nvme_fabrics 00:27:12.874 rmmod nvme_keyring 00:27:12.874 08:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:12.874 08:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:27:12.874 08:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:27:12.874 08:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2275941 ']' 00:27:12.874 08:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2275941 00:27:12.874 08:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@947 -- # '[' -z 2275941 ']' 00:27:12.874 08:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # kill -0 2275941 00:27:12.874 08:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # uname 00:27:12.874 08:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:12.874 08:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2275941 00:27:12.874 08:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:12.874 08:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:12.874 08:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2275941' 00:27:12.874 killing process with pid 2275941 00:27:12.874 08:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # kill 2275941 00:27:12.874 [2024-05-15 08:53:07.643486] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:12.874 08:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@971 -- # wait 2275941 00:27:13.132 08:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:13.132 08:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:13.132 08:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:13.132 08:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:13.132 08:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:13.132 08:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.132 08:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:13.132 08:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.667 08:53:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:15.667 00:27:15.667 real 0m10.181s 00:27:15.667 user 0m21.807s 00:27:15.667 sys 0m2.538s 00:27:15.667 08:53:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:15.667 08:53:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:15.667 ************************************ 00:27:15.667 END TEST nvmf_nmic 00:27:15.667 ************************************ 00:27:15.667 08:53:09 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test 
nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:27:15.667 08:53:09 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:15.667 08:53:09 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:15.667 08:53:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:15.667 ************************************ 00:27:15.667 START TEST nvmf_fio_target 00:27:15.667 ************************************ 00:27:15.667 08:53:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:27:15.667 * Looking for test storage... 00:27:15.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:15.667 08:53:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:15.667 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:27:15.667 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:15.667 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:15.667 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:15.667 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:15.667 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:15.667 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:15.667 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:15.667 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:15.667 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:15.667 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:15.667 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:15.667 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:15.667 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:15.667 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:15.667 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:15.668 08:53:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:18.198 08:53:12 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:18.198 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:18.198 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.198 08:53:12 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:18.198 Found net devices under 0000:09:00.0: cvl_0_0 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:18.198 Found net devices under 0000:09:00.1: cvl_0_1 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:18.198 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:18.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:18.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:27:18.199 00:27:18.199 --- 10.0.0.2 ping statistics --- 00:27:18.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.199 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:18.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:18.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:27:18.199 00:27:18.199 --- 10.0.0.1 ping statistics --- 00:27:18.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.199 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2278938 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2278938 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@828 -- # '[' -z 2278938 ']' 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
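For reference, the flags on the nvmf_tgt launch just traced map directly onto the startup notices that follow; this is a reading of the command line from the trace, not extra configuration:

# -m 0xF    core mask: run on cores 0-3, hence the four "Reactor started on core N" notices
# -i 0      shared-memory id (NVMF_APP_SHM_ID); it is why the notices suggest
#           'spdk_trace -s nvmf -i 0' and point at /dev/shm/nvmf_trace.0
# -e 0xFFFF tracepoint group mask, matching "Tracepoint Group Mask 0xFFFF specified"
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

waitforlisten then polls until the app accepts JSON-RPC on the UNIX domain socket /var/tmp/spdk.sock before the test proceeds.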
00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:18.199 08:53:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:27:18.199 [2024-05-15 08:53:12.779704] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:27:18.199 [2024-05-15 08:53:12.779794] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.199 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.199 [2024-05-15 08:53:12.859673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:18.199 [2024-05-15 08:53:12.950725] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.199 [2024-05-15 08:53:12.950798] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.199 [2024-05-15 08:53:12.950814] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.199 [2024-05-15 08:53:12.950828] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:18.199 [2024-05-15 08:53:12.950839] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:18.199 [2024-05-15 08:53:12.954239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.199 [2024-05-15 08:53:12.954288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:18.199 [2024-05-15 08:53:12.954373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:18.199 [2024-05-15 08:53:12.954376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.457 08:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:18.457 08:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@861 -- # return 0 00:27:18.457 08:53:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:18.457 08:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:18.457 08:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:27:18.457 08:53:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:18.457 08:53:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:18.715 [2024-05-15 08:53:13.350668] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.715 08:53:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:18.972 08:53:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:27:18.972 08:53:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:19.230 08:53:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:27:19.230 08:53:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:19.489 08:53:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:27:19.489 08:53:14 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:19.746 08:53:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:27:19.746 08:53:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:27:20.004 08:53:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:20.262 08:53:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:27:20.262 08:53:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:20.520 08:53:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:27:20.520 08:53:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:20.778 08:53:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:27:20.778 08:53:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:27:21.035 08:53:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:21.293 08:53:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:27:21.293 08:53:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:21.550 08:53:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:27:21.550 08:53:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:21.808 08:53:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:22.066 [2024-05-15 08:53:16.751096] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:22.066 [2024-05-15 08:53:16.751419] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.066 08:53:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:27:22.324 08:53:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:27:22.581 08:53:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:23.147 08:53:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:27:23.147 08:53:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local i=0 00:27:23.147 08:53:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:27:23.147 08:53:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # [[ -n 4 ]] 00:27:23.147 08:53:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # nvme_device_counter=4 00:27:23.147 08:53:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # sleep 2 00:27:25.043 08:53:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:27:25.044 08:53:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:27:25.044 08:53:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:27:25.044 08:53:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # nvme_devices=4 00:27:25.044 08:53:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:27:25.044 08:53:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # return 0 00:27:25.044 08:53:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:27:25.044 [global] 00:27:25.044 thread=1 00:27:25.044 invalidate=1 00:27:25.044 rw=write 00:27:25.044 time_based=1 00:27:25.044 runtime=1 00:27:25.044 ioengine=libaio 00:27:25.044 direct=1 00:27:25.044 bs=4096 00:27:25.044 iodepth=1 00:27:25.044 norandommap=0 00:27:25.044 numjobs=1 00:27:25.044 00:27:25.044 verify_dump=1 00:27:25.044 verify_backlog=512 00:27:25.044 verify_state_save=0 00:27:25.044 do_verify=1 00:27:25.044 verify=crc32c-intel 00:27:25.044 [job0] 00:27:25.044 filename=/dev/nvme0n1 00:27:25.044 [job1] 00:27:25.044 filename=/dev/nvme0n2 00:27:25.044 [job2] 00:27:25.044 filename=/dev/nvme0n3 00:27:25.044 [job3] 00:27:25.044 filename=/dev/nvme0n4 00:27:25.302 Could not set queue depth (nvme0n1) 00:27:25.302 Could not set queue depth (nvme0n2) 00:27:25.302 Could not set queue depth (nvme0n3) 00:27:25.302 Could not set queue depth (nvme0n4) 00:27:25.302 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:25.302 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:25.302 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:25.302 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:25.302 fio-3.35 00:27:25.302 Starting 4 threads 00:27:26.675 00:27:26.675 job0: (groupid=0, jobs=1): err= 0: pid=2280003: Wed May 15 08:53:21 2024 00:27:26.675 read: IOPS=52, BW=212KiB/s (217kB/s)(212KiB/1001msec) 00:27:26.675 slat (nsec): min=6210, max=40678, avg=11492.13, stdev=6888.91 00:27:26.675 clat (usec): min=339, max=41028, avg=16472.13, stdev=20000.82 00:27:26.675 lat (usec): min=346, max=41046, avg=16483.62, stdev=20005.21 00:27:26.675 clat percentiles (usec): 00:27:26.675 | 1.00th=[ 338], 5.00th=[ 379], 10.00th=[ 388], 20.00th=[ 392], 00:27:26.675 | 30.00th=[ 404], 40.00th=[ 416], 50.00th=[ 494], 60.00th=[ 529], 00:27:26.675 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:26.675 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:27:26.675 | 
99.99th=[41157] 00:27:26.675 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:27:26.675 slat (nsec): min=5864, max=40413, avg=8178.23, stdev=3909.01 00:27:26.675 clat (usec): min=180, max=443, avg=236.99, stdev=31.45 00:27:26.675 lat (usec): min=187, max=451, avg=245.17, stdev=31.84 00:27:26.675 clat percentiles (usec): 00:27:26.675 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 206], 20.00th=[ 219], 00:27:26.675 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 239], 00:27:26.675 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 289], 00:27:26.675 | 99.00th=[ 379], 99.50th=[ 383], 99.90th=[ 445], 99.95th=[ 445], 00:27:26.675 | 99.99th=[ 445] 00:27:26.675 bw ( KiB/s): min= 4096, max= 4096, per=25.58%, avg=4096.00, stdev= 0.00, samples=1 00:27:26.675 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:27:26.675 lat (usec) : 250=70.62%, 500=24.96%, 750=0.71% 00:27:26.675 lat (msec) : 50=3.72% 00:27:26.675 cpu : usr=0.20%, sys=0.40%, ctx=565, majf=0, minf=1 00:27:26.675 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:26.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:26.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:26.675 issued rwts: total=53,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:26.675 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:26.675 job1: (groupid=0, jobs=1): err= 0: pid=2280004: Wed May 15 08:53:21 2024 00:27:26.675 read: IOPS=21, BW=86.3KiB/s (88.3kB/s)(88.0KiB/1020msec) 00:27:26.675 slat (nsec): min=8114, max=36519, avg=16652.45, stdev=5586.73 00:27:26.675 clat (usec): min=40900, max=43024, avg=41110.78, stdev=477.18 00:27:26.675 lat (usec): min=40908, max=43047, avg=41127.43, stdev=479.03 00:27:26.675 clat percentiles (usec): 00:27:26.675 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:27:26.675 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:27:26.675 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:27:26.675 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:27:26.675 | 99.99th=[43254] 00:27:26.675 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:27:26.675 slat (nsec): min=8250, max=29416, avg=9929.37, stdev=3166.45 00:27:26.675 clat (usec): min=171, max=262, avg=205.10, stdev=22.25 00:27:26.675 lat (usec): min=179, max=276, avg=215.02, stdev=22.51 00:27:26.675 clat percentiles (usec): 00:27:26.675 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:27:26.675 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 204], 00:27:26.675 | 70.00th=[ 212], 80.00th=[ 231], 90.00th=[ 243], 95.00th=[ 245], 00:27:26.675 | 99.00th=[ 251], 99.50th=[ 258], 99.90th=[ 265], 99.95th=[ 265], 00:27:26.675 | 99.99th=[ 265] 00:27:26.675 bw ( KiB/s): min= 4096, max= 4096, per=25.58%, avg=4096.00, stdev= 0.00, samples=1 00:27:26.675 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:27:26.675 lat (usec) : 250=94.01%, 500=1.87% 00:27:26.675 lat (msec) : 50=4.12% 00:27:26.675 cpu : usr=0.59%, sys=0.39%, ctx=537, majf=0, minf=1 00:27:26.675 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:26.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:26.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:26.675 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:26.675 
latency : target=0, window=0, percentile=100.00%, depth=1 00:27:26.675 job2: (groupid=0, jobs=1): err= 0: pid=2280005: Wed May 15 08:53:21 2024 00:27:26.675 read: IOPS=1843, BW=7373KiB/s (7550kB/s)(7380KiB/1001msec) 00:27:26.675 slat (nsec): min=5575, max=37962, avg=9118.03, stdev=4624.99 00:27:26.675 clat (usec): min=218, max=1933, avg=281.63, stdev=65.82 00:27:26.675 lat (usec): min=224, max=1941, avg=290.75, stdev=66.73 00:27:26.675 clat percentiles (usec): 00:27:26.675 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 243], 00:27:26.675 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 277], 00:27:26.675 | 70.00th=[ 289], 80.00th=[ 310], 90.00th=[ 359], 95.00th=[ 392], 00:27:26.675 | 99.00th=[ 469], 99.50th=[ 529], 99.90th=[ 701], 99.95th=[ 1942], 00:27:26.675 | 99.99th=[ 1942] 00:27:26.675 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:27:26.675 slat (nsec): min=7069, max=49489, avg=12562.87, stdev=5823.57 00:27:26.675 clat (usec): min=150, max=1065, avg=207.63, stdev=63.62 00:27:26.675 lat (usec): min=157, max=1084, avg=220.19, stdev=64.99 00:27:26.675 clat percentiles (usec): 00:27:26.675 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 167], 00:27:26.675 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 194], 00:27:26.675 | 70.00th=[ 206], 80.00th=[ 239], 90.00th=[ 293], 95.00th=[ 330], 00:27:26.675 | 99.00th=[ 404], 99.50th=[ 412], 99.90th=[ 486], 99.95th=[ 889], 00:27:26.675 | 99.99th=[ 1074] 00:27:26.675 bw ( KiB/s): min= 8192, max= 8192, per=51.15%, avg=8192.00, stdev= 0.00, samples=1 00:27:26.675 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:27:26.675 lat (usec) : 250=58.41%, 500=41.15%, 750=0.36%, 1000=0.03% 00:27:26.675 lat (msec) : 2=0.05% 00:27:26.675 cpu : usr=4.30%, sys=4.70%, ctx=3894, majf=0, minf=1 00:27:26.675 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:26.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:26.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:26.675 issued rwts: total=1845,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:26.675 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:26.675 job3: (groupid=0, jobs=1): err= 0: pid=2280006: Wed May 15 08:53:21 2024 00:27:26.675 read: IOPS=564, BW=2256KiB/s (2310kB/s)(2308KiB/1023msec) 00:27:26.675 slat (nsec): min=5578, max=41682, avg=11606.97, stdev=5690.98 00:27:26.675 clat (usec): min=243, max=41319, avg=1328.78, stdev=6252.35 00:27:26.675 lat (usec): min=250, max=41337, avg=1340.38, stdev=6252.73 00:27:26.675 clat percentiles (usec): 00:27:26.676 | 1.00th=[ 265], 5.00th=[ 281], 10.00th=[ 281], 20.00th=[ 289], 00:27:26.676 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 326], 60.00th=[ 359], 00:27:26.676 | 70.00th=[ 379], 80.00th=[ 412], 90.00th=[ 437], 95.00th=[ 457], 00:27:26.676 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:27:26.676 | 99.99th=[41157] 00:27:26.676 write: IOPS=1000, BW=4004KiB/s (4100kB/s)(4096KiB/1023msec); 0 zone resets 00:27:26.676 slat (nsec): min=6170, max=59917, avg=11697.32, stdev=6289.97 00:27:26.676 clat (usec): min=152, max=1700, avg=226.26, stdev=73.21 00:27:26.676 lat (usec): min=161, max=1709, avg=237.95, stdev=74.39 00:27:26.676 clat percentiles (usec): 00:27:26.676 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 184], 00:27:26.676 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 212], 00:27:26.676 | 70.00th=[ 233], 80.00th=[ 273], 
90.00th=[ 310], 95.00th=[ 334], 00:27:26.676 | 99.00th=[ 400], 99.50th=[ 420], 99.90th=[ 494], 99.95th=[ 1696], 00:27:26.676 | 99.99th=[ 1696] 00:27:26.676 bw ( KiB/s): min= 4096, max= 4096, per=25.58%, avg=4096.00, stdev= 0.00, samples=2 00:27:26.676 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:27:26.676 lat (usec) : 250=47.53%, 500=51.22%, 750=0.31% 00:27:26.676 lat (msec) : 2=0.06%, 50=0.87% 00:27:26.676 cpu : usr=1.17%, sys=2.45%, ctx=1601, majf=0, minf=2 00:27:26.676 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:26.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:26.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:26.676 issued rwts: total=577,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:26.676 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:26.676 00:27:26.676 Run status group 0 (all jobs): 00:27:26.676 READ: bw=9763KiB/s (9998kB/s), 86.3KiB/s-7373KiB/s (88.3kB/s-7550kB/s), io=9988KiB (10.2MB), run=1001-1023msec 00:27:26.676 WRITE: bw=15.6MiB/s (16.4MB/s), 2008KiB/s-8184KiB/s (2056kB/s-8380kB/s), io=16.0MiB (16.8MB), run=1001-1023msec 00:27:26.676 00:27:26.676 Disk stats (read/write): 00:27:26.676 nvme0n1: ios=99/512, merge=0/0, ticks=747/122, in_queue=869, util=86.67% 00:27:26.676 nvme0n2: ios=40/512, merge=0/0, ticks=1656/101, in_queue=1757, util=97.96% 00:27:26.676 nvme0n3: ios=1536/1716, merge=0/0, ticks=425/347, in_queue=772, util=88.89% 00:27:26.676 nvme0n4: ios=572/1024, merge=0/0, ticks=562/212, in_queue=774, util=89.55% 00:27:26.676 08:53:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:27:26.676 [global] 00:27:26.676 thread=1 00:27:26.676 invalidate=1 00:27:26.676 rw=randwrite 00:27:26.676 time_based=1 00:27:26.676 runtime=1 00:27:26.676 ioengine=libaio 00:27:26.676 direct=1 00:27:26.676 bs=4096 00:27:26.676 iodepth=1 00:27:26.676 norandommap=0 00:27:26.676 numjobs=1 00:27:26.676 00:27:26.676 verify_dump=1 00:27:26.676 verify_backlog=512 00:27:26.676 verify_state_save=0 00:27:26.676 do_verify=1 00:27:26.676 verify=crc32c-intel 00:27:26.676 [job0] 00:27:26.676 filename=/dev/nvme0n1 00:27:26.676 [job1] 00:27:26.676 filename=/dev/nvme0n2 00:27:26.676 [job2] 00:27:26.676 filename=/dev/nvme0n3 00:27:26.676 [job3] 00:27:26.676 filename=/dev/nvme0n4 00:27:26.676 Could not set queue depth (nvme0n1) 00:27:26.676 Could not set queue depth (nvme0n2) 00:27:26.676 Could not set queue depth (nvme0n3) 00:27:26.676 Could not set queue depth (nvme0n4) 00:27:26.971 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:26.971 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:26.971 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:26.971 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:26.971 fio-3.35 00:27:26.971 Starting 4 threads 00:27:28.344 00:27:28.344 job0: (groupid=0, jobs=1): err= 0: pid=2280232: Wed May 15 08:53:22 2024 00:27:28.344 read: IOPS=22, BW=90.8KiB/s (93.0kB/s)(92.0KiB/1013msec) 00:27:28.344 slat (nsec): min=7221, max=29693, avg=17986.35, stdev=7892.52 00:27:28.344 clat (usec): min=284, max=41005, avg=39174.96, stdev=8478.54 00:27:28.344 lat (usec): min=297, max=41023, 
avg=39192.94, stdev=8479.64 00:27:28.344 clat percentiles (usec): 00:27:28.344 | 1.00th=[ 285], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:27:28.344 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:27:28.344 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:28.344 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:27:28.344 | 99.99th=[41157] 00:27:28.344 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:27:28.344 slat (nsec): min=7755, max=31904, avg=9645.09, stdev=2436.80 00:27:28.344 clat (usec): min=157, max=335, avg=204.69, stdev=21.44 00:27:28.344 lat (usec): min=165, max=367, avg=214.33, stdev=22.16 00:27:28.344 clat percentiles (usec): 00:27:28.344 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 182], 00:27:28.344 | 30.00th=[ 196], 40.00th=[ 206], 50.00th=[ 208], 60.00th=[ 212], 00:27:28.344 | 70.00th=[ 217], 80.00th=[ 221], 90.00th=[ 227], 95.00th=[ 235], 00:27:28.344 | 99.00th=[ 249], 99.50th=[ 285], 99.90th=[ 334], 99.95th=[ 334], 00:27:28.344 | 99.99th=[ 334] 00:27:28.344 bw ( KiB/s): min= 4096, max= 4096, per=29.57%, avg=4096.00, stdev= 0.00, samples=1 00:27:28.344 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:27:28.344 lat (usec) : 250=94.77%, 500=1.12% 00:27:28.344 lat (msec) : 50=4.11% 00:27:28.344 cpu : usr=0.10%, sys=0.89%, ctx=536, majf=0, minf=2 00:27:28.344 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:28.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.344 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.344 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:28.344 job1: (groupid=0, jobs=1): err= 0: pid=2280233: Wed May 15 08:53:22 2024 00:27:28.344 read: IOPS=21, BW=85.0KiB/s (87.1kB/s)(88.0KiB/1035msec) 00:27:28.344 slat (nsec): min=5825, max=33673, avg=19476.55, stdev=8703.08 00:27:28.344 clat (usec): min=40973, max=42029, avg=41874.63, stdev=291.76 00:27:28.344 lat (usec): min=40986, max=42042, avg=41894.11, stdev=294.18 00:27:28.344 clat percentiles (usec): 00:27:28.344 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[42206], 00:27:28.344 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:27:28.344 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:28.344 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:27:28.344 | 99.99th=[42206] 00:27:28.344 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:27:28.344 slat (nsec): min=5929, max=31622, avg=8068.94, stdev=2538.54 00:27:28.344 clat (usec): min=180, max=375, avg=210.60, stdev=28.35 00:27:28.344 lat (usec): min=187, max=382, avg=218.67, stdev=28.40 00:27:28.344 clat percentiles (usec): 00:27:28.344 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 190], 00:27:28.344 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 204], 00:27:28.344 | 70.00th=[ 215], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 260], 00:27:28.344 | 99.00th=[ 306], 99.50th=[ 343], 99.90th=[ 375], 99.95th=[ 375], 00:27:28.344 | 99.99th=[ 375] 00:27:28.344 bw ( KiB/s): min= 4096, max= 4096, per=29.57%, avg=4096.00, stdev= 0.00, samples=1 00:27:28.344 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:27:28.344 lat (usec) : 250=86.52%, 500=9.36% 00:27:28.344 lat (msec) : 50=4.12% 
00:27:28.344 cpu : usr=0.29%, sys=0.29%, ctx=535, majf=0, minf=1 00:27:28.344 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:28.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.344 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.344 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:28.344 job2: (groupid=0, jobs=1): err= 0: pid=2280234: Wed May 15 08:53:22 2024 00:27:28.344 read: IOPS=21, BW=86.1KiB/s (88.2kB/s)(88.0KiB/1022msec) 00:27:28.344 slat (nsec): min=7185, max=36583, avg=20189.95, stdev=9160.82 00:27:28.344 clat (usec): min=40863, max=41061, avg=40971.18, stdev=44.95 00:27:28.344 lat (usec): min=40870, max=41079, avg=40991.37, stdev=44.55 00:27:28.344 clat percentiles (usec): 00:27:28.344 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:27:28.344 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:27:28.344 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:28.344 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:27:28.344 | 99.99th=[41157] 00:27:28.344 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:27:28.344 slat (nsec): min=6612, max=41437, avg=9657.07, stdev=3582.99 00:27:28.344 clat (usec): min=166, max=383, avg=221.94, stdev=26.26 00:27:28.344 lat (usec): min=174, max=399, avg=231.60, stdev=26.22 00:27:28.344 clat percentiles (usec): 00:27:28.344 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 196], 20.00th=[ 206], 00:27:28.344 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 223], 00:27:28.344 | 70.00th=[ 231], 80.00th=[ 241], 90.00th=[ 260], 95.00th=[ 269], 00:27:28.344 | 99.00th=[ 289], 99.50th=[ 330], 99.90th=[ 383], 99.95th=[ 383], 00:27:28.344 | 99.99th=[ 383] 00:27:28.344 bw ( KiB/s): min= 4096, max= 4096, per=29.57%, avg=4096.00, stdev= 0.00, samples=1 00:27:28.344 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:27:28.344 lat (usec) : 250=82.58%, 500=13.30% 00:27:28.344 lat (msec) : 50=4.12% 00:27:28.344 cpu : usr=0.29%, sys=0.59%, ctx=535, majf=0, minf=1 00:27:28.344 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:28.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.344 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.344 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:28.344 job3: (groupid=0, jobs=1): err= 0: pid=2280235: Wed May 15 08:53:22 2024 00:27:28.344 read: IOPS=1991, BW=7964KiB/s (8155kB/s)(7972KiB/1001msec) 00:27:28.344 slat (usec): min=4, max=138, avg=12.26, stdev= 9.41 00:27:28.344 clat (usec): min=161, max=546, avg=283.58, stdev=41.03 00:27:28.344 lat (usec): min=237, max=579, avg=295.85, stdev=44.29 00:27:28.344 clat percentiles (usec): 00:27:28.344 | 1.00th=[ 239], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 253], 00:27:28.344 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:27:28.344 | 70.00th=[ 293], 80.00th=[ 314], 90.00th=[ 355], 95.00th=[ 375], 00:27:28.344 | 99.00th=[ 388], 99.50th=[ 404], 99.90th=[ 502], 99.95th=[ 545], 00:27:28.344 | 99.99th=[ 545] 00:27:28.344 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:27:28.344 slat (nsec): min=5559, max=47893, avg=9502.76, stdev=4947.90 
00:27:28.344 clat (usec): min=142, max=357, avg=184.80, stdev=34.47 00:27:28.344 lat (usec): min=148, max=374, avg=194.30, stdev=37.29 00:27:28.344 clat percentiles (usec): 00:27:28.345 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:27:28.345 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 180], 00:27:28.345 | 70.00th=[ 192], 80.00th=[ 215], 90.00th=[ 233], 95.00th=[ 251], 00:27:28.345 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 343], 99.95th=[ 359], 00:27:28.345 | 99.99th=[ 359] 00:27:28.345 bw ( KiB/s): min= 8632, max= 8632, per=62.32%, avg=8632.00, stdev= 0.00, samples=1 00:27:28.345 iops : min= 2158, max= 2158, avg=2158.00, stdev= 0.00, samples=1 00:27:28.345 lat (usec) : 250=55.70%, 500=44.27%, 750=0.02% 00:27:28.345 cpu : usr=2.40%, sys=4.60%, ctx=4042, majf=0, minf=1 00:27:28.345 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:28.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.345 issued rwts: total=1993,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.345 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:28.345 00:27:28.345 Run status group 0 (all jobs): 00:27:28.345 READ: bw=7961KiB/s (8152kB/s), 85.0KiB/s-7964KiB/s (87.1kB/s-8155kB/s), io=8240KiB (8438kB), run=1001-1035msec 00:27:28.345 WRITE: bw=13.5MiB/s (14.2MB/s), 1979KiB/s-8184KiB/s (2026kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1035msec 00:27:28.345 00:27:28.345 Disk stats (read/write): 00:27:28.345 nvme0n1: ios=45/512, merge=0/0, ticks=1558/102, in_queue=1660, util=84.67% 00:27:28.345 nvme0n2: ios=39/512, merge=0/0, ticks=1590/104, in_queue=1694, util=88.59% 00:27:28.345 nvme0n3: ios=74/512, merge=0/0, ticks=781/108, in_queue=889, util=94.74% 00:27:28.345 nvme0n4: ios=1593/2015, merge=0/0, ticks=495/355, in_queue=850, util=95.64% 00:27:28.345 08:53:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:27:28.345 [global] 00:27:28.345 thread=1 00:27:28.345 invalidate=1 00:27:28.345 rw=write 00:27:28.345 time_based=1 00:27:28.345 runtime=1 00:27:28.345 ioengine=libaio 00:27:28.345 direct=1 00:27:28.345 bs=4096 00:27:28.345 iodepth=128 00:27:28.345 norandommap=0 00:27:28.345 numjobs=1 00:27:28.345 00:27:28.345 verify_dump=1 00:27:28.345 verify_backlog=512 00:27:28.345 verify_state_save=0 00:27:28.345 do_verify=1 00:27:28.345 verify=crc32c-intel 00:27:28.345 [job0] 00:27:28.345 filename=/dev/nvme0n1 00:27:28.345 [job1] 00:27:28.345 filename=/dev/nvme0n2 00:27:28.345 [job2] 00:27:28.345 filename=/dev/nvme0n3 00:27:28.345 [job3] 00:27:28.345 filename=/dev/nvme0n4 00:27:28.345 Could not set queue depth (nvme0n1) 00:27:28.345 Could not set queue depth (nvme0n2) 00:27:28.345 Could not set queue depth (nvme0n3) 00:27:28.345 Could not set queue depth (nvme0n4) 00:27:28.345 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:28.345 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:28.345 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:28.345 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:28.345 fio-3.35 00:27:28.345 Starting 4 threads 00:27:29.715 00:27:29.715 job0: (groupid=0, 
jobs=1): err= 0: pid=2280467: Wed May 15 08:53:24 2024 00:27:29.715 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:27:29.715 slat (usec): min=3, max=13922, avg=124.33, stdev=787.06 00:27:29.715 clat (usec): min=9342, max=35020, avg=15766.22, stdev=3537.60 00:27:29.715 lat (usec): min=9457, max=35037, avg=15890.55, stdev=3612.28 00:27:29.715 clat percentiles (usec): 00:27:29.715 | 1.00th=[10421], 5.00th=[11994], 10.00th=[12256], 20.00th=[12911], 00:27:29.715 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14746], 60.00th=[16057], 00:27:29.715 | 70.00th=[16712], 80.00th=[18482], 90.00th=[20841], 95.00th=[21627], 00:27:29.715 | 99.00th=[26608], 99.50th=[27919], 99.90th=[29492], 99.95th=[30540], 00:27:29.715 | 99.99th=[34866] 00:27:29.715 write: IOPS=3715, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1004msec); 0 zone resets 00:27:29.715 slat (usec): min=4, max=9310, avg=139.31, stdev=711.93 00:27:29.715 clat (usec): min=1080, max=50370, avg=18746.27, stdev=8117.86 00:27:29.715 lat (usec): min=1091, max=50388, avg=18885.57, stdev=8184.80 00:27:29.715 clat percentiles (usec): 00:27:29.715 | 1.00th=[ 8094], 5.00th=[10552], 10.00th=[11076], 20.00th=[12125], 00:27:29.715 | 30.00th=[13829], 40.00th=[14877], 50.00th=[17957], 60.00th=[19792], 00:27:29.715 | 70.00th=[20579], 80.00th=[21365], 90.00th=[29230], 95.00th=[38011], 00:27:29.715 | 99.00th=[45876], 99.50th=[47449], 99.90th=[50594], 99.95th=[50594], 00:27:29.715 | 99.99th=[50594] 00:27:29.715 bw ( KiB/s): min=12528, max=16384, per=23.78%, avg=14456.00, stdev=2726.60, samples=2 00:27:29.715 iops : min= 3132, max= 4096, avg=3614.00, stdev=681.65, samples=2 00:27:29.716 lat (msec) : 2=0.07%, 4=0.34%, 10=1.96%, 20=70.41%, 50=27.13% 00:27:29.716 lat (msec) : 100=0.10% 00:27:29.716 cpu : usr=4.79%, sys=5.98%, ctx=327, majf=0, minf=1 00:27:29.716 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:27:29.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:29.716 issued rwts: total=3584,3730,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.716 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:29.716 job1: (groupid=0, jobs=1): err= 0: pid=2280468: Wed May 15 08:53:24 2024 00:27:29.716 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:27:29.716 slat (usec): min=3, max=14803, avg=176.85, stdev=1184.02 00:27:29.716 clat (usec): min=8816, max=48347, avg=21207.94, stdev=7569.25 00:27:29.716 lat (usec): min=8823, max=49476, avg=21384.79, stdev=7692.30 00:27:29.716 clat percentiles (usec): 00:27:29.716 | 1.00th=[10159], 5.00th=[12911], 10.00th=[13042], 20.00th=[13566], 00:27:29.716 | 30.00th=[14091], 40.00th=[17957], 50.00th=[21103], 60.00th=[24511], 00:27:29.716 | 70.00th=[25297], 80.00th=[26084], 90.00th=[31065], 95.00th=[33817], 00:27:29.716 | 99.00th=[44303], 99.50th=[46400], 99.90th=[48497], 99.95th=[48497], 00:27:29.716 | 99.99th=[48497] 00:27:29.716 write: IOPS=2624, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1007msec); 0 zone resets 00:27:29.716 slat (usec): min=4, max=10298, avg=198.17, stdev=790.14 00:27:29.716 clat (usec): min=6627, max=61777, avg=27622.95, stdev=12253.67 00:27:29.716 lat (usec): min=6646, max=61785, avg=27821.12, stdev=12341.52 00:27:29.716 clat percentiles (usec): 00:27:29.716 | 1.00th=[ 7635], 5.00th=[12911], 10.00th=[16188], 20.00th=[19530], 00:27:29.716 | 30.00th=[20055], 40.00th=[20579], 50.00th=[21365], 60.00th=[25297], 00:27:29.716 | 70.00th=[32375], 80.00th=[39060], 
90.00th=[46400], 95.00th=[53740], 00:27:29.716 | 99.00th=[61080], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604], 00:27:29.716 | 99.99th=[61604] 00:27:29.716 bw ( KiB/s): min=10120, max=10360, per=16.85%, avg=10240.00, stdev=169.71, samples=2 00:27:29.716 iops : min= 2530, max= 2590, avg=2560.00, stdev=42.43, samples=2 00:27:29.716 lat (msec) : 10=0.69%, 20=33.62%, 50=61.98%, 100=3.71% 00:27:29.716 cpu : usr=3.58%, sys=4.27%, ctx=339, majf=0, minf=1 00:27:29.716 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:29.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:29.716 issued rwts: total=2560,2643,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.716 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:29.716 job2: (groupid=0, jobs=1): err= 0: pid=2280469: Wed May 15 08:53:24 2024 00:27:29.716 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:27:29.716 slat (usec): min=3, max=5775, avg=96.27, stdev=499.01 00:27:29.716 clat (usec): min=7976, max=16591, avg=12289.58, stdev=1095.34 00:27:29.716 lat (usec): min=7993, max=17509, avg=12385.86, stdev=1142.60 00:27:29.716 clat percentiles (usec): 00:27:29.716 | 1.00th=[ 9503], 5.00th=[10421], 10.00th=[10814], 20.00th=[11469], 00:27:29.716 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:27:29.716 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13435], 95.00th=[14091], 00:27:29.716 | 99.00th=[15533], 99.50th=[16188], 99.90th=[16450], 99.95th=[16581], 00:27:29.716 | 99.99th=[16581] 00:27:29.716 write: IOPS=5384, BW=21.0MiB/s (22.1MB/s)(21.1MiB/1004msec); 0 zone resets 00:27:29.716 slat (usec): min=4, max=4721, avg=85.59, stdev=376.36 00:27:29.716 clat (usec): min=506, max=17987, avg=11772.41, stdev=1449.01 00:27:29.716 lat (usec): min=3767, max=18041, avg=11858.00, stdev=1450.24 00:27:29.716 clat percentiles (usec): 00:27:29.716 | 1.00th=[ 7635], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[10945], 00:27:29.716 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:27:29.716 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12911], 95.00th=[14222], 00:27:29.716 | 99.00th=[15270], 99.50th=[15664], 99.90th=[16319], 99.95th=[16319], 00:27:29.716 | 99.99th=[17957] 00:27:29.716 bw ( KiB/s): min=21064, max=21160, per=34.73%, avg=21112.00, stdev=67.88, samples=2 00:27:29.716 iops : min= 5266, max= 5290, avg=5278.00, stdev=16.97, samples=2 00:27:29.716 lat (usec) : 750=0.01% 00:27:29.716 lat (msec) : 4=0.09%, 10=5.12%, 20=94.78% 00:27:29.716 cpu : usr=7.08%, sys=8.47%, ctx=598, majf=0, minf=1 00:27:29.716 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:27:29.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:29.716 issued rwts: total=5120,5406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.716 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:29.716 job3: (groupid=0, jobs=1): err= 0: pid=2280470: Wed May 15 08:53:24 2024 00:27:29.716 read: IOPS=3280, BW=12.8MiB/s (13.4MB/s)(13.0MiB/1011msec) 00:27:29.716 slat (usec): min=3, max=16778, avg=138.98, stdev=1047.38 00:27:29.716 clat (usec): min=6159, max=35722, avg=17624.37, stdev=4387.58 00:27:29.716 lat (usec): min=6193, max=35737, avg=17763.35, stdev=4468.87 00:27:29.716 clat percentiles (usec): 00:27:29.716 | 1.00th=[ 9634], 5.00th=[11600], 
10.00th=[12518], 20.00th=[14746], 00:27:29.716 | 30.00th=[15139], 40.00th=[15926], 50.00th=[16909], 60.00th=[17957], 00:27:29.716 | 70.00th=[18744], 80.00th=[19792], 90.00th=[24511], 95.00th=[26870], 00:27:29.716 | 99.00th=[30802], 99.50th=[31327], 99.90th=[32375], 99.95th=[35390], 00:27:29.716 | 99.99th=[35914] 00:27:29.716 write: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec); 0 zone resets 00:27:29.716 slat (usec): min=4, max=15124, avg=138.65, stdev=659.43 00:27:29.716 clat (usec): min=1870, max=75288, avg=19346.75, stdev=12013.04 00:27:29.716 lat (usec): min=1883, max=75303, avg=19485.39, stdev=12096.86 00:27:29.716 clat percentiles (usec): 00:27:29.716 | 1.00th=[ 5145], 5.00th=[ 9241], 10.00th=[10683], 20.00th=[13960], 00:27:29.716 | 30.00th=[15401], 40.00th=[15926], 50.00th=[16319], 60.00th=[17171], 00:27:29.716 | 70.00th=[18482], 80.00th=[19792], 90.00th=[27395], 95.00th=[55313], 00:27:29.716 | 99.00th=[66847], 99.50th=[70779], 99.90th=[74974], 99.95th=[74974], 00:27:29.716 | 99.99th=[74974] 00:27:29.716 bw ( KiB/s): min=12288, max=16384, per=23.59%, avg=14336.00, stdev=2896.31, samples=2 00:27:29.716 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:27:29.716 lat (msec) : 2=0.06%, 10=3.93%, 20=76.47%, 50=16.43%, 100=3.12% 00:27:29.716 cpu : usr=3.66%, sys=6.24%, ctx=464, majf=0, minf=1 00:27:29.716 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:29.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:29.716 issued rwts: total=3317,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.716 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:29.716 00:27:29.716 Run status group 0 (all jobs): 00:27:29.716 READ: bw=56.3MiB/s (59.1MB/s), 9.93MiB/s-19.9MiB/s (10.4MB/s-20.9MB/s), io=57.0MiB (59.7MB), run=1004-1011msec 00:27:29.716 WRITE: bw=59.4MiB/s (62.2MB/s), 10.3MiB/s-21.0MiB/s (10.8MB/s-22.1MB/s), io=60.0MiB (62.9MB), run=1004-1011msec 00:27:29.716 00:27:29.716 Disk stats (read/write): 00:27:29.716 nvme0n1: ios=2983/3072, merge=0/0, ticks=23261/29773, in_queue=53034, util=91.78% 00:27:29.716 nvme0n2: ios=2076/2463, merge=0/0, ticks=20402/32466, in_queue=52868, util=91.17% 00:27:29.716 nvme0n3: ios=4366/4608, merge=0/0, ticks=18277/16597, in_queue=34874, util=93.23% 00:27:29.716 nvme0n4: ios=2600/3071, merge=0/0, ticks=44635/51660, in_queue=96295, util=97.06% 00:27:29.716 08:53:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:27:29.716 [global] 00:27:29.716 thread=1 00:27:29.716 invalidate=1 00:27:29.716 rw=randwrite 00:27:29.716 time_based=1 00:27:29.716 runtime=1 00:27:29.716 ioengine=libaio 00:27:29.716 direct=1 00:27:29.716 bs=4096 00:27:29.716 iodepth=128 00:27:29.716 norandommap=0 00:27:29.716 numjobs=1 00:27:29.716 00:27:29.716 verify_dump=1 00:27:29.716 verify_backlog=512 00:27:29.716 verify_state_save=0 00:27:29.716 do_verify=1 00:27:29.716 verify=crc32c-intel 00:27:29.716 [job0] 00:27:29.716 filename=/dev/nvme0n1 00:27:29.716 [job1] 00:27:29.716 filename=/dev/nvme0n2 00:27:29.716 [job2] 00:27:29.716 filename=/dev/nvme0n3 00:27:29.716 [job3] 00:27:29.716 filename=/dev/nvme0n4 00:27:29.716 Could not set queue depth (nvme0n1) 00:27:29.716 Could not set queue depth (nvme0n2) 00:27:29.716 Could not set queue depth (nvme0n3) 00:27:29.716 Could not set queue depth (nvme0n4) 00:27:29.716 
job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:29.716 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:29.716 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:29.716 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:29.716 fio-3.35 00:27:29.716 Starting 4 threads 00:27:31.090 00:27:31.090 job0: (groupid=0, jobs=1): err= 0: pid=2280761: Wed May 15 08:53:25 2024 00:27:31.090 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:27:31.090 slat (usec): min=2, max=23755, avg=143.64, stdev=1063.75 00:27:31.090 clat (usec): min=2395, max=44424, avg=17690.38, stdev=6561.56 00:27:31.090 lat (usec): min=2400, max=51468, avg=17834.02, stdev=6648.59 00:27:31.090 clat percentiles (usec): 00:27:31.090 | 1.00th=[ 2606], 5.00th=[ 7701], 10.00th=[11076], 20.00th=[13304], 00:27:31.090 | 30.00th=[14877], 40.00th=[15664], 50.00th=[16581], 60.00th=[18482], 00:27:31.090 | 70.00th=[19530], 80.00th=[21365], 90.00th=[26346], 95.00th=[30016], 00:27:31.090 | 99.00th=[39060], 99.50th=[40633], 99.90th=[41681], 99.95th=[41681], 00:27:31.090 | 99.99th=[44303] 00:27:31.090 write: IOPS=3174, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1003msec); 0 zone resets 00:27:31.090 slat (usec): min=3, max=31163, avg=164.75, stdev=1145.40 00:27:31.090 clat (usec): min=1631, max=63049, avg=22768.59, stdev=12907.65 00:27:31.090 lat (usec): min=1817, max=63064, avg=22933.34, stdev=13002.90 00:27:31.090 clat percentiles (usec): 00:27:31.090 | 1.00th=[ 2671], 5.00th=[ 6521], 10.00th=[ 9372], 20.00th=[11338], 00:27:31.090 | 30.00th=[13173], 40.00th=[16712], 50.00th=[21890], 60.00th=[23725], 00:27:31.090 | 70.00th=[26084], 80.00th=[31327], 90.00th=[42730], 95.00th=[50070], 00:27:31.090 | 99.00th=[58459], 99.50th=[58983], 99.90th=[63177], 99.95th=[63177], 00:27:31.090 | 99.99th=[63177] 00:27:31.090 bw ( KiB/s): min=11320, max=13312, per=19.78%, avg=12316.00, stdev=1408.56, samples=2 00:27:31.090 iops : min= 2830, max= 3328, avg=3079.00, stdev=352.14, samples=2 00:27:31.090 lat (msec) : 2=0.03%, 4=2.81%, 10=7.00%, 20=49.20%, 50=38.79% 00:27:31.090 lat (msec) : 100=2.16% 00:27:31.091 cpu : usr=3.09%, sys=2.79%, ctx=289, majf=0, minf=1 00:27:31.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:31.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:31.091 issued rwts: total=3072,3184,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.091 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:31.091 job1: (groupid=0, jobs=1): err= 0: pid=2280776: Wed May 15 08:53:25 2024 00:27:31.091 read: IOPS=5015, BW=19.6MiB/s (20.5MB/s)(19.7MiB/1006msec) 00:27:31.091 slat (usec): min=2, max=15068, avg=103.88, stdev=756.57 00:27:31.091 clat (usec): min=3281, max=39711, avg=13759.37, stdev=4427.95 00:27:31.091 lat (usec): min=4024, max=39727, avg=13863.25, stdev=4478.54 00:27:31.091 clat percentiles (usec): 00:27:31.091 | 1.00th=[ 7111], 5.00th=[ 9241], 10.00th=[10159], 20.00th=[10683], 00:27:31.091 | 30.00th=[11207], 40.00th=[11863], 50.00th=[12649], 60.00th=[13566], 00:27:31.091 | 70.00th=[14353], 80.00th=[16188], 90.00th=[19006], 95.00th=[22676], 00:27:31.091 | 99.00th=[31065], 99.50th=[31065], 99.90th=[33424], 99.95th=[33424], 00:27:31.091 | 
99.99th=[39584] 00:27:31.091 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:27:31.091 slat (usec): min=3, max=9946, avg=83.76, stdev=599.78 00:27:31.091 clat (usec): min=872, max=31063, avg=11288.01, stdev=2523.46 00:27:31.091 lat (usec): min=998, max=31069, avg=11371.76, stdev=2579.13 00:27:31.091 clat percentiles (usec): 00:27:31.091 | 1.00th=[ 3982], 5.00th=[ 6849], 10.00th=[ 8094], 20.00th=[10028], 00:27:31.091 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11338], 60.00th=[11731], 00:27:31.091 | 70.00th=[12256], 80.00th=[12911], 90.00th=[13829], 95.00th=[15008], 00:27:31.091 | 99.00th=[18744], 99.50th=[23987], 99.90th=[23987], 99.95th=[23987], 00:27:31.091 | 99.99th=[31065] 00:27:31.091 bw ( KiB/s): min=17856, max=23104, per=32.90%, avg=20480.00, stdev=3710.90, samples=2 00:27:31.091 iops : min= 4464, max= 5776, avg=5120.00, stdev=927.72, samples=2 00:27:31.091 lat (usec) : 1000=0.06% 00:27:31.091 lat (msec) : 2=0.04%, 4=0.43%, 10=13.29%, 20=81.75%, 50=4.43% 00:27:31.091 cpu : usr=6.47%, sys=6.67%, ctx=362, majf=0, minf=1 00:27:31.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:27:31.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:31.091 issued rwts: total=5046,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.091 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:31.091 job2: (groupid=0, jobs=1): err= 0: pid=2280814: Wed May 15 08:53:25 2024 00:27:31.091 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:27:31.091 slat (usec): min=2, max=12979, avg=136.16, stdev=852.92 00:27:31.091 clat (usec): min=4355, max=47131, avg=18410.56, stdev=7593.08 00:27:31.091 lat (usec): min=4361, max=53488, avg=18546.72, stdev=7668.47 00:27:31.091 clat percentiles (usec): 00:27:31.091 | 1.00th=[ 4424], 5.00th=[10159], 10.00th=[11469], 20.00th=[12387], 00:27:31.091 | 30.00th=[13698], 40.00th=[15270], 50.00th=[16188], 60.00th=[17433], 00:27:31.091 | 70.00th=[20055], 80.00th=[23987], 90.00th=[30802], 95.00th=[33817], 00:27:31.091 | 99.00th=[38011], 99.50th=[42730], 99.90th=[46924], 99.95th=[46924], 00:27:31.091 | 99.99th=[46924] 00:27:31.091 write: IOPS=3373, BW=13.2MiB/s (13.8MB/s)(13.3MiB/1006msec); 0 zone resets 00:27:31.091 slat (usec): min=3, max=14552, avg=152.39, stdev=732.27 00:27:31.091 clat (usec): min=938, max=57766, avg=20947.71, stdev=12454.88 00:27:31.091 lat (usec): min=963, max=57780, avg=21100.10, stdev=12538.74 00:27:31.091 clat percentiles (usec): 00:27:31.091 | 1.00th=[ 2474], 5.00th=[ 6259], 10.00th=[ 9503], 20.00th=[11469], 00:27:31.091 | 30.00th=[12518], 40.00th=[13566], 50.00th=[17433], 60.00th=[21103], 00:27:31.091 | 70.00th=[25560], 80.00th=[30802], 90.00th=[40633], 95.00th=[47449], 00:27:31.091 | 99.00th=[53216], 99.50th=[53740], 99.90th=[57934], 99.95th=[57934], 00:27:31.091 | 99.99th=[57934] 00:27:31.091 bw ( KiB/s): min=12288, max=13848, per=20.99%, avg=13068.00, stdev=1103.09, samples=2 00:27:31.091 iops : min= 3072, max= 3462, avg=3267.00, stdev=275.77, samples=2 00:27:31.091 lat (usec) : 1000=0.03% 00:27:31.091 lat (msec) : 2=0.12%, 4=1.16%, 10=5.57%, 20=56.12%, 50=34.83% 00:27:31.091 lat (msec) : 100=2.17% 00:27:31.091 cpu : usr=3.08%, sys=3.98%, ctx=421, majf=0, minf=1 00:27:31.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:27:31.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.091 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:31.091 issued rwts: total=3072,3394,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.091 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:31.091 job3: (groupid=0, jobs=1): err= 0: pid=2280822: Wed May 15 08:53:25 2024 00:27:31.091 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:27:31.091 slat (usec): min=3, max=10487, avg=107.02, stdev=649.83 00:27:31.091 clat (usec): min=5377, max=32476, avg=14024.31, stdev=4360.60 00:27:31.091 lat (usec): min=5393, max=32486, avg=14131.33, stdev=4415.22 00:27:31.091 clat percentiles (usec): 00:27:31.091 | 1.00th=[ 5473], 5.00th=[ 6521], 10.00th=[11207], 20.00th=[11863], 00:27:31.091 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12780], 00:27:31.091 | 70.00th=[14615], 80.00th=[17171], 90.00th=[19530], 95.00th=[22938], 00:27:31.091 | 99.00th=[26608], 99.50th=[28967], 99.90th=[29754], 99.95th=[31851], 00:27:31.091 | 99.99th=[32375] 00:27:31.091 write: IOPS=3947, BW=15.4MiB/s (16.2MB/s)(15.5MiB/1007msec); 0 zone resets 00:27:31.091 slat (usec): min=3, max=25785, avg=145.52, stdev=948.38 00:27:31.091 clat (usec): min=5229, max=47498, avg=19448.95, stdev=9646.72 00:27:31.091 lat (usec): min=5722, max=47508, avg=19594.47, stdev=9707.00 00:27:31.091 clat percentiles (usec): 00:27:31.091 | 1.00th=[ 8455], 5.00th=[11338], 10.00th=[11600], 20.00th=[11994], 00:27:31.091 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13042], 60.00th=[21103], 00:27:31.091 | 70.00th=[22676], 80.00th=[27395], 90.00th=[35390], 95.00th=[40633], 00:27:31.091 | 99.00th=[44303], 99.50th=[45351], 99.90th=[47449], 99.95th=[47449], 00:27:31.091 | 99.99th=[47449] 00:27:31.091 bw ( KiB/s): min=12168, max=18616, per=24.72%, avg=15392.00, stdev=4559.42, samples=2 00:27:31.091 iops : min= 3042, max= 4654, avg=3848.00, stdev=1139.86, samples=2 00:27:31.091 lat (msec) : 10=5.34%, 20=68.94%, 50=25.72% 00:27:31.091 cpu : usr=4.57%, sys=5.67%, ctx=330, majf=0, minf=1 00:27:31.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:31.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:31.091 issued rwts: total=3584,3975,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.091 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:31.091 00:27:31.091 Run status group 0 (all jobs): 00:27:31.091 READ: bw=57.3MiB/s (60.1MB/s), 11.9MiB/s-19.6MiB/s (12.5MB/s-20.5MB/s), io=57.7MiB (60.5MB), run=1003-1007msec 00:27:31.091 WRITE: bw=60.8MiB/s (63.8MB/s), 12.4MiB/s-19.9MiB/s (13.0MB/s-20.8MB/s), io=61.2MiB (64.2MB), run=1003-1007msec 00:27:31.091 00:27:31.091 Disk stats (read/write): 00:27:31.091 nvme0n1: ios=2310/2560, merge=0/0, ticks=26888/33566, in_queue=60454, util=86.37% 00:27:31.091 nvme0n2: ios=4198/4608, merge=0/0, ticks=40117/34426, in_queue=74543, util=96.65% 00:27:31.091 nvme0n3: ios=2910/3072, merge=0/0, ticks=24160/32440, in_queue=56600, util=90.36% 00:27:31.091 nvme0n4: ios=3227/3584, merge=0/0, ticks=22422/31571, in_queue=53993, util=99.47% 00:27:31.091 08:53:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:27:31.091 08:53:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2280964 00:27:31.091 08:53:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:27:31.091 08:53:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 
00:27:31.091 [global] 00:27:31.091 thread=1 00:27:31.091 invalidate=1 00:27:31.091 rw=read 00:27:31.091 time_based=1 00:27:31.091 runtime=10 00:27:31.091 ioengine=libaio 00:27:31.091 direct=1 00:27:31.091 bs=4096 00:27:31.091 iodepth=1 00:27:31.092 norandommap=1 00:27:31.092 numjobs=1 00:27:31.092 00:27:31.092 [job0] 00:27:31.092 filename=/dev/nvme0n1 00:27:31.092 [job1] 00:27:31.092 filename=/dev/nvme0n2 00:27:31.092 [job2] 00:27:31.092 filename=/dev/nvme0n3 00:27:31.092 [job3] 00:27:31.092 filename=/dev/nvme0n4 00:27:31.092 Could not set queue depth (nvme0n1) 00:27:31.092 Could not set queue depth (nvme0n2) 00:27:31.092 Could not set queue depth (nvme0n3) 00:27:31.092 Could not set queue depth (nvme0n4) 00:27:31.349 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:31.349 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:31.349 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:31.349 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:31.349 fio-3.35 00:27:31.349 Starting 4 threads 00:27:34.629 08:53:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:27:34.629 08:53:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:27:34.629 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=7086080, buflen=4096 00:27:34.629 fio: pid=2281055, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:27:34.630 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=42852352, buflen=4096 00:27:34.630 fio: pid=2281054, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:27:34.630 08:53:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:27:34.630 08:53:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:27:34.887 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=22421504, buflen=4096 00:27:34.887 fio: pid=2281052, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:27:34.887 08:53:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:27:34.887 08:53:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:27:35.145 08:53:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:27:35.145 08:53:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:27:35.145 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=372736, buflen=4096 00:27:35.145 fio: pid=2281053, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:27:35.145 00:27:35.145 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2281052: Wed May 15 08:53:29 2024 00:27:35.145 read: IOPS=1617, BW=6469KiB/s (6624kB/s)(21.4MiB/3385msec) 00:27:35.145 slat (usec): min=5, max=17485, 
avg=19.20, stdev=328.71 00:27:35.145 clat (usec): min=222, max=42137, avg=596.11, stdev=3558.06 00:27:35.145 lat (usec): min=228, max=52056, avg=615.30, stdev=3594.38 00:27:35.145 clat percentiles (usec): 00:27:35.145 | 1.00th=[ 239], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 260], 00:27:35.145 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:27:35.145 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 326], 00:27:35.145 | 99.00th=[ 775], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:27:35.145 | 99.99th=[42206] 00:27:35.145 bw ( KiB/s): min= 96, max=13328, per=37.42%, avg=7285.33, stdev=5860.21, samples=6 00:27:35.146 iops : min= 24, max= 3332, avg=1821.33, stdev=1465.05, samples=6 00:27:35.146 lat (usec) : 250=7.65%, 500=90.05%, 750=1.26%, 1000=0.22% 00:27:35.146 lat (msec) : 2=0.05%, 50=0.75% 00:27:35.146 cpu : usr=1.36%, sys=2.84%, ctx=5478, majf=0, minf=1 00:27:35.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:35.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:35.146 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:35.146 issued rwts: total=5475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:35.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:35.146 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2281053: Wed May 15 08:53:29 2024 00:27:35.146 read: IOPS=25, BW=99.8KiB/s (102kB/s)(364KiB/3648msec) 00:27:35.146 slat (usec): min=9, max=17707, avg=297.18, stdev=2009.90 00:27:35.146 clat (usec): min=441, max=42105, avg=39770.33, stdev=7278.90 00:27:35.146 lat (usec): min=617, max=58783, avg=40070.18, stdev=7120.52 00:27:35.146 clat percentiles (usec): 00:27:35.146 | 1.00th=[ 441], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:27:35.146 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:27:35.146 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:27:35.146 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:27:35.146 | 99.99th=[42206] 00:27:35.146 bw ( KiB/s): min= 96, max= 112, per=0.50%, avg=98.71, stdev= 5.96, samples=7 00:27:35.146 iops : min= 24, max= 28, avg=24.57, stdev= 1.51, samples=7 00:27:35.146 lat (usec) : 500=1.09%, 750=1.09%, 1000=1.09% 00:27:35.146 lat (msec) : 50=95.65% 00:27:35.146 cpu : usr=0.08%, sys=0.00%, ctx=95, majf=0, minf=1 00:27:35.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:35.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:35.146 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:35.146 issued rwts: total=92,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:35.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:35.146 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2281054: Wed May 15 08:53:29 2024 00:27:35.146 read: IOPS=3354, BW=13.1MiB/s (13.7MB/s)(40.9MiB/3119msec) 00:27:35.146 slat (usec): min=5, max=10687, avg=13.55, stdev=142.77 00:27:35.146 clat (usec): min=219, max=1146, avg=281.71, stdev=55.04 00:27:35.146 lat (usec): min=226, max=11015, avg=295.26, stdev=154.37 00:27:35.146 clat percentiles (usec): 00:27:35.146 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 253], 00:27:35.146 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:27:35.146 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 
330], 00:27:35.146 | 99.00th=[ 523], 99.50th=[ 570], 99.90th=[ 922], 99.95th=[ 979], 00:27:35.146 | 99.99th=[ 1045] 00:27:35.146 bw ( KiB/s): min=12040, max=14800, per=68.68%, avg=13372.00, stdev=1030.59, samples=6 00:27:35.146 iops : min= 3010, max= 3700, avg=3343.00, stdev=257.65, samples=6 00:27:35.146 lat (usec) : 250=15.63%, 500=83.05%, 750=1.05%, 1000=0.23% 00:27:35.146 lat (msec) : 2=0.04% 00:27:35.146 cpu : usr=3.01%, sys=5.36%, ctx=10466, majf=0, minf=1 00:27:35.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:35.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:35.146 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:35.146 issued rwts: total=10463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:35.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:35.146 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2281055: Wed May 15 08:53:29 2024 00:27:35.146 read: IOPS=597, BW=2390KiB/s (2448kB/s)(6920KiB/2895msec) 00:27:35.146 slat (nsec): min=4565, max=64878, avg=21898.56, stdev=11324.21 00:27:35.146 clat (usec): min=255, max=42331, avg=1646.15, stdev=7239.35 00:27:35.146 lat (usec): min=261, max=42345, avg=1668.05, stdev=7239.03 00:27:35.146 clat percentiles (usec): 00:27:35.146 | 1.00th=[ 269], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 285], 00:27:35.146 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 330], 60.00th=[ 343], 00:27:35.146 | 70.00th=[ 347], 80.00th=[ 351], 90.00th=[ 375], 95.00th=[ 396], 00:27:35.146 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:27:35.146 | 99.99th=[42206] 00:27:35.146 bw ( KiB/s): min= 96, max=11352, per=14.13%, avg=2752.00, stdev=4886.78, samples=5 00:27:35.146 iops : min= 24, max= 2838, avg=688.00, stdev=1221.69, samples=5 00:27:35.146 lat (usec) : 500=96.71% 00:27:35.146 lat (msec) : 50=3.24% 00:27:35.146 cpu : usr=0.69%, sys=1.35%, ctx=1731, majf=0, minf=1 00:27:35.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:35.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:35.146 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:35.146 issued rwts: total=1731,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:35.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:35.146 00:27:35.146 Run status group 0 (all jobs): 00:27:35.146 READ: bw=19.0MiB/s (19.9MB/s), 99.8KiB/s-13.1MiB/s (102kB/s-13.7MB/s), io=69.4MiB (72.7MB), run=2895-3648msec 00:27:35.146 00:27:35.146 Disk stats (read/write): 00:27:35.146 nvme0n1: ios=5473/0, merge=0/0, ticks=3138/0, in_queue=3138, util=94.88% 00:27:35.146 nvme0n2: ios=89/0, merge=0/0, ticks=3537/0, in_queue=3537, util=95.95% 00:27:35.146 nvme0n3: ios=10456/0, merge=0/0, ticks=3958/0, in_queue=3958, util=99.50% 00:27:35.146 nvme0n4: ios=1729/0, merge=0/0, ticks=2793/0, in_queue=2793, util=96.75% 00:27:35.404 08:53:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:27:35.404 08:53:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:27:35.688 08:53:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:27:35.688 08:53:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:27:35.947 08:53:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:27:35.947 08:53:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:27:36.204 08:53:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:27:36.204 08:53:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:27:36.461 08:53:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:27:36.461 08:53:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2280964 00:27:36.461 08:53:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:27:36.461 08:53:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:36.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:36.461 08:53:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:36.461 08:53:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # local i=0 00:27:36.461 08:53:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:27:36.461 08:53:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:36.461 08:53:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:27:36.461 08:53:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:36.461 08:53:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1228 -- # return 0 00:27:36.461 08:53:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:27:36.461 08:53:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:27:36.461 nvmf hotplug test: fio failed as expected 00:27:36.461 08:53:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:36.828 rmmod nvme_tcp 00:27:36.828 rmmod nvme_fabrics 00:27:36.828 rmmod nvme_keyring 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2278938 ']' 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2278938 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@947 -- # '[' -z 2278938 ']' 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # kill -0 2278938 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # uname 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2278938 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2278938' 00:27:36.828 killing process with pid 2278938 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # kill 2278938 00:27:36.828 [2024-05-15 08:53:31.506167] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:36.828 08:53:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@971 -- # wait 2278938 00:27:37.090 08:53:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:37.090 08:53:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:37.090 08:53:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:37.090 08:53:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:37.090 08:53:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:37.090 08:53:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.090 08:53:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:37.090 08:53:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.988 08:53:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:38.988 00:27:38.988 real 0m23.758s 00:27:38.988 user 1m20.580s 00:27:38.988 sys 0m7.229s 00:27:38.988 08:53:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:38.988 08:53:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:27:38.988 ************************************ 00:27:38.988 END TEST nvmf_fio_target 00:27:38.988 ************************************ 00:27:38.988 08:53:33 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:27:38.988 08:53:33 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:38.988 08:53:33 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:38.988 08:53:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:39.246 ************************************ 
00:27:39.246 START TEST nvmf_bdevio 00:27:39.246 ************************************ 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:27:39.246 * Looking for test storage... 00:27:39.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:27:39.246 08:53:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:41.777 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:41.777 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:41.777 Found net devices under 0000:09:00.0: cvl_0_0 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:41.777 
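The device discovery traced here is gather_supported_nvmf_pci_devs from test/nvmf/common.sh: it builds allow-lists of Intel E810/X722 and Mellanox PCI IDs (the e810/x722/mlx arrays above) and then resolves each matching PCI function to its kernel netdev through sysfs. The core of that resolution, as a standalone sketch (the PCI address and cvl_* names are the ones from this run):

    pci=0000:09:00.0                                     # E810 port matched above (0x8086:0x159b)
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")              # strip the sysfs path, leaving cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"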
Found net devices under 0000:09:00.1: cvl_0_1 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:41.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:41.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:27:41.777 00:27:41.777 --- 10.0.0.2 ping statistics --- 00:27:41.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.777 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:41.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:41.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:27:41.777 00:27:41.777 --- 10.0.0.1 ping statistics --- 00:27:41.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.777 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:41.777 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:41.778 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:41.778 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2283958 00:27:41.778 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:27:41.778 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2283958 00:27:41.778 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@828 -- # '[' -z 2283958 ']' 00:27:41.778 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.778 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:41.778 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.778 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:41.778 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:41.778 [2024-05-15 08:53:36.499158] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:27:41.778 [2024-05-15 08:53:36.499261] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.778 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.036 [2024-05-15 08:53:36.572884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:42.037 [2024-05-15 08:53:36.654792] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:42.037 [2024-05-15 08:53:36.654842] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:42.037 [2024-05-15 08:53:36.654866] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:42.037 [2024-05-15 08:53:36.654878] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:42.037 [2024-05-15 08:53:36.654888] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:42.037 [2024-05-15 08:53:36.654984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:42.037 [2024-05-15 08:53:36.655046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:27:42.037 [2024-05-15 08:53:36.655112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:27:42.037 [2024-05-15 08:53:36.655114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:42.037 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:42.037 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@861 -- # return 0 00:27:42.037 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:42.037 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:42.037 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:42.037 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.037 08:53:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:42.037 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.037 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:42.037 [2024-05-15 08:53:36.812011] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.037 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.037 08:53:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:42.037 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.037 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:42.295 Malloc0 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
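At this point bdevio.sh has provisioned the target entirely over JSON-RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace, and a TCP listener. rpc_cmd is a thin wrapper around scripts/rpc.py, so the same state can be reproduced against a running nvmf_tgt with the equivalent direct calls (arguments copied from the trace; -u 8192 is the IO unit size, -o is the TCP transport tweak the harness sets in NVMF_TRANSPORT_OPTS):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420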
00:27:42.295 [2024-05-15 08:53:36.865592] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:42.295 [2024-05-15 08:53:36.865924] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.295 { 00:27:42.295 "params": { 00:27:42.295 "name": "Nvme$subsystem", 00:27:42.295 "trtype": "$TEST_TRANSPORT", 00:27:42.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.295 "adrfam": "ipv4", 00:27:42.295 "trsvcid": "$NVMF_PORT", 00:27:42.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.295 "hdgst": ${hdgst:-false}, 00:27:42.295 "ddgst": ${ddgst:-false} 00:27:42.295 }, 00:27:42.295 "method": "bdev_nvme_attach_controller" 00:27:42.295 } 00:27:42.295 EOF 00:27:42.295 )") 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:27:42.295 08:53:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:42.295 "params": { 00:27:42.295 "name": "Nvme1", 00:27:42.295 "trtype": "tcp", 00:27:42.295 "traddr": "10.0.0.2", 00:27:42.295 "adrfam": "ipv4", 00:27:42.295 "trsvcid": "4420", 00:27:42.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:42.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:42.295 "hdgst": false, 00:27:42.295 "ddgst": false 00:27:42.295 }, 00:27:42.295 "method": "bdev_nvme_attach_controller" 00:27:42.295 }' 00:27:42.295 [2024-05-15 08:53:36.910978] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
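The bdevio binary is configured like any SPDK initiator application: gen_nvmf_target_json prints the bdev_nvme_attach_controller JSON shown above, and the harness feeds it in over an anonymous pipe, which is why the trace records --json /dev/fd/62 rather than a file on disk. The invocation pattern, sketched with process substitution (paths abbreviated):

    test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)   # the shell turns <(...) into /dev/fd/NN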
00:27:42.295 [2024-05-15 08:53:36.911065] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2283998 ] 00:27:42.295 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.295 [2024-05-15 08:53:36.984866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:42.295 [2024-05-15 08:53:37.071207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.295 [2024-05-15 08:53:37.071268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:42.295 [2024-05-15 08:53:37.071273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.862 I/O targets: 00:27:42.862 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:27:42.862 00:27:42.862 00:27:42.862 CUnit - A unit testing framework for C - Version 2.1-3 00:27:42.862 http://cunit.sourceforge.net/ 00:27:42.862 00:27:42.862 00:27:42.862 Suite: bdevio tests on: Nvme1n1 00:27:42.862 Test: blockdev write read block ...passed 00:27:42.862 Test: blockdev write zeroes read block ...passed 00:27:42.862 Test: blockdev write zeroes read no split ...passed 00:27:42.862 Test: blockdev write zeroes read split ...passed 00:27:42.862 Test: blockdev write zeroes read split partial ...passed 00:27:42.862 Test: blockdev reset ...[2024-05-15 08:53:37.568410] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.862 [2024-05-15 08:53:37.568522] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5348c0 (9): Bad file descriptor 00:27:42.862 [2024-05-15 08:53:37.638203] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:42.862 passed 00:27:43.120 Test: blockdev write read 8 blocks ...passed 00:27:43.120 Test: blockdev write read size > 128k ...passed 00:27:43.121 Test: blockdev write read invalid size ...passed 00:27:43.121 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:43.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:43.121 Test: blockdev write read max offset ...passed 00:27:43.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:43.121 Test: blockdev writev readv 8 blocks ...passed 00:27:43.121 Test: blockdev writev readv 30 x 1block ...passed 00:27:43.121 Test: blockdev writev readv block ...passed 00:27:43.121 Test: blockdev writev readv size > 128k ...passed 00:27:43.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:43.121 Test: blockdev comparev and writev ...[2024-05-15 08:53:37.853674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:43.121 [2024-05-15 08:53:37.853713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.121 [2024-05-15 08:53:37.853737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:43.121 [2024-05-15 08:53:37.853755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.121 [2024-05-15 08:53:37.854140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:43.121 [2024-05-15 08:53:37.854164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:43.121 [2024-05-15 08:53:37.854187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:43.121 [2024-05-15 08:53:37.854204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:43.121 [2024-05-15 08:53:37.854570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:43.121 [2024-05-15 08:53:37.854605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:43.121 [2024-05-15 08:53:37.854628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:43.121 [2024-05-15 08:53:37.854645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:43.121 [2024-05-15 08:53:37.854998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:43.121 [2024-05-15 08:53:37.855024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:43.121 [2024-05-15 08:53:37.855046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:43.121 [2024-05-15 08:53:37.855063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:43.121 passed 00:27:43.379 Test: blockdev nvme passthru rw ...passed 00:27:43.379 Test: blockdev nvme passthru vendor specific ...[2024-05-15 08:53:37.938504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:43.379 [2024-05-15 08:53:37.938532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:43.379 [2024-05-15 08:53:37.938696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:43.379 [2024-05-15 08:53:37.938719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:43.379 [2024-05-15 08:53:37.938882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:43.379 [2024-05-15 08:53:37.938904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:43.379 [2024-05-15 08:53:37.939068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:43.379 [2024-05-15 08:53:37.939092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:43.379 passed 00:27:43.379 Test: blockdev nvme admin passthru ...passed 00:27:43.379 Test: blockdev copy ...passed 00:27:43.379 00:27:43.379 Run Summary: Type Total Ran Passed Failed Inactive 00:27:43.379 suites 1 1 n/a 0 0 00:27:43.379 tests 23 23 23 0 0 00:27:43.379 asserts 152 152 152 0 n/a 00:27:43.379 00:27:43.379 Elapsed time = 1.164 seconds 00:27:43.379 08:53:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:43.637 rmmod nvme_tcp 00:27:43.637 rmmod nvme_fabrics 00:27:43.637 rmmod nvme_keyring 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2283958 ']' 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2283958 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@947 -- # '[' -z 
2283958 ']' 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # kill -0 2283958 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # uname 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2283958 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2283958' 00:27:43.637 killing process with pid 2283958 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # kill 2283958 00:27:43.637 [2024-05-15 08:53:38.247006] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:43.637 08:53:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@971 -- # wait 2283958 00:27:43.897 08:53:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:43.897 08:53:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:43.897 08:53:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:43.897 08:53:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:43.897 08:53:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:43.897 08:53:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.897 08:53:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:43.897 08:53:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.832 08:53:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:45.832 00:27:45.832 real 0m6.713s 00:27:45.832 user 0m10.581s 00:27:45.832 sys 0m2.361s 00:27:45.832 08:53:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:45.832 08:53:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:45.832 ************************************ 00:27:45.832 END TEST nvmf_bdevio 00:27:45.832 ************************************ 00:27:45.832 08:53:40 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:27:45.832 08:53:40 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:45.832 08:53:40 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:45.832 08:53:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:45.832 ************************************ 00:27:45.832 START TEST nvmf_auth_target 00:27:45.832 ************************************ 00:27:45.832 08:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:27:46.090 * Looking for test storage... 
00:27:46.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.090 08:53:40 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # 
'[' -z tcp ']' 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:46.091 08:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:48.623 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:48.624 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:48.624 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:48.624 Found net devices under 
0000:09:00.0: cvl_0_0 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:48.624 Found net devices under 0000:09:00.1: cvl_0_1 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:48.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:48.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:27:48.624 00:27:48.624 --- 10.0.0.2 ping statistics --- 00:27:48.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.624 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:48.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:48.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:27:48.624 00:27:48.624 --- 10.0.0.1 ping statistics --- 00:27:48.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.624 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2286469 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2286469 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 2286469 ']' 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
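As in the bdevio run, nvmf_tcp_init splits the two E810 ports between network namespaces so target and initiator traffic use separate kernel stacks: cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace for the target, while cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator. The sequence just traced reduces to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port leaves the root netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                            # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched inside the namespace (the ip netns exec prefix on nvmfappstart above), this time with -L nvmf_auth so the target logs its authentication state machine.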
00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:48.624 08:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=2286490 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ab7b4618f8c8f65188de45288240c4332afedd24796be8cb 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ndc 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ab7b4618f8c8f65188de45288240c4332afedd24796be8cb 0 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ab7b4618f8c8f65188de45288240c4332afedd24796be8cb 0 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ab7b4618f8c8f65188de45288240c4332afedd24796be8cb 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:27:48.883 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ndc 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ndc 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.ndc 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7a23b1477237f62e99b54b71ab51c829 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.2Qr 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7a23b1477237f62e99b54b71ab51c829 1 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7a23b1477237f62e99b54b71ab51c829 1 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7a23b1477237f62e99b54b71ab51c829 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.2Qr 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.2Qr 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.2Qr 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e9b9b029f7189c305a942b076a75b7ce2fd29814f3191b93 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.veH 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e9b9b029f7189c305a942b076a75b7ce2fd29814f3191b93 2 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e9b9b029f7189c305a942b076a75b7ce2fd29814f3191b93 2 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e9b9b029f7189c305a942b076a75b7ce2fd29814f3191b93 00:27:49.142 
08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.veH 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.veH 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.veH 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:49.142 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ff383d32e6ff39d093d36c2b1415243ace1701ba8cde12ae7c34472ad53ae62f 00:27:49.143 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:49.143 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Xui 00:27:49.143 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ff383d32e6ff39d093d36c2b1415243ace1701ba8cde12ae7c34472ad53ae62f 3 00:27:49.143 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ff383d32e6ff39d093d36c2b1415243ace1701ba8cde12ae7c34472ad53ae62f 3 00:27:49.143 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:27:49.143 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:49.143 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ff383d32e6ff39d093d36c2b1415243ace1701ba8cde12ae7c34472ad53ae62f 00:27:49.143 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:27:49.143 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:27:49.143 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Xui 00:27:49.143 08:53:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Xui 00:27:49.143 08:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.Xui 00:27:49.143 08:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 2286469 00:27:49.143 08:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 2286469 ']' 00:27:49.143 08:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.143 08:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:49.143 08:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
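Each gen_dhchap_key call above draws len/2 random bytes with xxd -p and keeps them as a printable hex string of the requested length (48, 32, 48, 64 for keys 0-3 here); the inline python - step then turns that string into the DHHC-1 secret written to a /tmp/spdk.key-<digest>.XXX file and registered with the keyrings below. A sketch of the helper pair, assuming the DH-HMAC-CHAP secret representation is base64 over the printable key bytes followed by their CRC-32 in little-endian order, with the digest index (0=null, 1=sha256, 2=sha384, 3=sha512) as the two-digit field after DHHC-1:

#!/usr/bin/env bash
# Sketch of gen_dhchap_key/format_dhchap_key as traced above. Assumption:
# the secret body is base64(key bytes || CRC-32 of the key, little-endian).
set -euo pipefail
declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
name=$1                                          # null | sha256 | sha384 | sha512
len=$2                                           # printable key length: 32, 48 or 64
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
secret=$(python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")      # assumed byte order
print(base64.b64encode(key + crc).decode())
PY
)
file=$(mktemp -t "spdk.key-$name.XXX")
printf 'DHHC-1:%02d:%s:\n' "${digests[$name]}" "$secret" > "$file"
chmod 0600 "$file"
echo "$file"

The resulting secrets are exactly what reappears on the initiator side later in the run, e.g. the nvme connect --dhchap-secret DHHC-1:00:...: argument for keys[0].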
00:27:49.143 08:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:49.143 08:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:49.401 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:49.401 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:27:49.401 08:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 2286490 /var/tmp/host.sock 00:27:49.401 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 2286490 ']' 00:27:49.401 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/host.sock 00:27:49.401 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:49.401 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:27:49.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:27:49.401 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:49.401 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:49.659 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:49.659 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:27:49.659 08:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:27:49.659 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.659 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:49.659 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.659 08:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:27:49.659 08:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ndc 00:27:49.659 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.659 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:49.659 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.659 08:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ndc 00:27:49.659 08:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ndc 00:27:49.917 08:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:27:49.917 08:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.2Qr 00:27:49.917 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.917 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:49.917 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.917 08:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.2Qr 00:27:49.917 08:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
keyring_file_add_key key1 /tmp/spdk.key-sha256.2Qr 00:27:50.175 08:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:27:50.175 08:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.veH 00:27:50.175 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.175 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:50.175 08:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.176 08:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.veH 00:27:50.176 08:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.veH 00:27:50.433 08:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:27:50.433 08:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Xui 00:27:50.433 08:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.433 08:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:50.433 08:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.433 08:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Xui 00:27:50.433 08:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Xui 00:27:50.691 08:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:27:50.691 08:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:27:50.691 08:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:27:50.691 08:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:50.691 08:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:50.948 08:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:27:50.949 08:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:27:50.949 08:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:50.949 08:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:50.949 08:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:50.949 08:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:27:50.949 08:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.949 08:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:50.949 08:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.949 08:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:27:50.949 08:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:27:51.207 00:27:51.207 08:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:27:51.207 08:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:27:51.207 08:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:51.465 08:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.465 08:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:51.465 08:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.465 08:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:51.465 08:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.465 08:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:27:51.465 { 00:27:51.465 "cntlid": 1, 00:27:51.465 "qid": 0, 00:27:51.465 "state": "enabled", 00:27:51.465 "listen_address": { 00:27:51.465 "trtype": "TCP", 00:27:51.465 "adrfam": "IPv4", 00:27:51.465 "traddr": "10.0.0.2", 00:27:51.465 "trsvcid": "4420" 00:27:51.465 }, 00:27:51.465 "peer_address": { 00:27:51.465 "trtype": "TCP", 00:27:51.465 "adrfam": "IPv4", 00:27:51.465 "traddr": "10.0.0.1", 00:27:51.465 "trsvcid": "59118" 00:27:51.465 }, 00:27:51.465 "auth": { 00:27:51.465 "state": "completed", 00:27:51.465 "digest": "sha256", 00:27:51.465 "dhgroup": "null" 00:27:51.465 } 00:27:51.465 } 00:27:51.465 ]' 00:27:51.465 08:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:27:51.465 08:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:51.465 08:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:27:51.723 08:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:27:51.723 08:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:27:51.723 08:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:51.723 08:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:51.723 08:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:51.981 08:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:YWI3YjQ2MThmOGM4ZjY1MTg4ZGU0NTI4ODI0MGM0MzMyYWZlZGQyNDc5NmJlOGNivkRwUg==: 00:27:52.914 08:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:27:52.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:52.914 08:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:52.914 08:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.914 08:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:52.914 08:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.914 08:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:27:52.914 08:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:52.914 08:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:53.173 08:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:27:53.173 08:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:27:53.173 08:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:53.173 08:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:53.173 08:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:53.173 08:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:27:53.173 08:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.173 08:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:53.173 08:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.173 08:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:27:53.173 08:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:27:53.431 00:27:53.431 08:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:27:53.431 08:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:27:53.431 08:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:53.688 08:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.688 08:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:53.688 08:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.688 08:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:53.688 08:53:48 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.689 08:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:27:53.689 { 00:27:53.689 "cntlid": 3, 00:27:53.689 "qid": 0, 00:27:53.689 "state": "enabled", 00:27:53.689 "listen_address": { 00:27:53.689 "trtype": "TCP", 00:27:53.689 "adrfam": "IPv4", 00:27:53.689 "traddr": "10.0.0.2", 00:27:53.689 "trsvcid": "4420" 00:27:53.689 }, 00:27:53.689 "peer_address": { 00:27:53.689 "trtype": "TCP", 00:27:53.689 "adrfam": "IPv4", 00:27:53.689 "traddr": "10.0.0.1", 00:27:53.689 "trsvcid": "59156" 00:27:53.689 }, 00:27:53.689 "auth": { 00:27:53.689 "state": "completed", 00:27:53.689 "digest": "sha256", 00:27:53.689 "dhgroup": "null" 00:27:53.689 } 00:27:53.689 } 00:27:53.689 ]' 00:27:53.689 08:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:27:53.689 08:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:53.689 08:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:27:53.689 08:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:27:53.689 08:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:27:53.946 08:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:53.946 08:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:53.946 08:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:53.946 08:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:N2EyM2IxNDc3MjM3ZjYyZTk5YjU0YjcxYWI1MWM4Mjl3KYtU: 00:27:55.320 08:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:55.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:55.320 08:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:55.320 08:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.320 08:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:55.320 08:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.320 08:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:27:55.320 08:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:55.320 08:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:55.320 08:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:27:55.320 08:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:27:55.320 08:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:55.320 08:53:49 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # dhgroup=null 00:27:55.320 08:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:55.320 08:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:27:55.320 08:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.320 08:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:55.320 08:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.320 08:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:27:55.320 08:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:27:55.578 00:27:55.578 08:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:27:55.578 08:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:27:55.578 08:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:55.836 08:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.836 08:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:55.836 08:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.836 08:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:55.836 08:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.836 08:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:27:55.836 { 00:27:55.836 "cntlid": 5, 00:27:55.836 "qid": 0, 00:27:55.836 "state": "enabled", 00:27:55.836 "listen_address": { 00:27:55.836 "trtype": "TCP", 00:27:55.837 "adrfam": "IPv4", 00:27:55.837 "traddr": "10.0.0.2", 00:27:55.837 "trsvcid": "4420" 00:27:55.837 }, 00:27:55.837 "peer_address": { 00:27:55.837 "trtype": "TCP", 00:27:55.837 "adrfam": "IPv4", 00:27:55.837 "traddr": "10.0.0.1", 00:27:55.837 "trsvcid": "59198" 00:27:55.837 }, 00:27:55.837 "auth": { 00:27:55.837 "state": "completed", 00:27:55.837 "digest": "sha256", 00:27:55.837 "dhgroup": "null" 00:27:55.837 } 00:27:55.837 } 00:27:55.837 ]' 00:27:55.837 08:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:27:55.837 08:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:55.837 08:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:27:55.837 08:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:27:55.837 08:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:27:55.837 08:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:55.837 08:53:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:55.837 08:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:56.094 08:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZTliOWIwMjlmNzE4OWMzMDVhOTQyYjA3NmE3NWI3Y2UyZmQyOTgxNGYzMTkxYjkzn7qbcw==: 00:27:57.028 08:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:57.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:57.028 08:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:57.028 08:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.028 08:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:57.028 08:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.028 08:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:27:57.028 08:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:57.028 08:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:57.286 08:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:27:57.286 08:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:27:57.286 08:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:57.286 08:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:57.286 08:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:57.286 08:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:27:57.286 08:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.286 08:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:57.286 08:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.286 08:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:57.286 08:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:57.852 00:27:57.852 08:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:27:57.852 08:53:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:27:57.852 08:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:58.109 08:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.109 08:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:58.109 08:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.109 08:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:58.109 08:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.109 08:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:27:58.109 { 00:27:58.109 "cntlid": 7, 00:27:58.109 "qid": 0, 00:27:58.109 "state": "enabled", 00:27:58.109 "listen_address": { 00:27:58.109 "trtype": "TCP", 00:27:58.109 "adrfam": "IPv4", 00:27:58.109 "traddr": "10.0.0.2", 00:27:58.109 "trsvcid": "4420" 00:27:58.109 }, 00:27:58.109 "peer_address": { 00:27:58.109 "trtype": "TCP", 00:27:58.109 "adrfam": "IPv4", 00:27:58.109 "traddr": "10.0.0.1", 00:27:58.109 "trsvcid": "56566" 00:27:58.109 }, 00:27:58.109 "auth": { 00:27:58.109 "state": "completed", 00:27:58.109 "digest": "sha256", 00:27:58.109 "dhgroup": "null" 00:27:58.109 } 00:27:58.109 } 00:27:58.109 ]' 00:27:58.109 08:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:27:58.109 08:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:58.109 08:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:27:58.109 08:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:27:58.109 08:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:27:58.109 08:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:58.109 08:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:58.109 08:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:58.367 08:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZmYzODNkMzJlNmZmMzlkMDkzZDM2YzJiMTQxNTI0M2FjZTE3MDFiYThjZGUxMmFlN2MzNDQ3MmFkNTNhZTYyZgA1QD8=: 00:27:59.299 08:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:59.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:59.299 08:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:59.299 08:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.299 08:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:59.300 08:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.300 08:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for 
dhgroup in "${dhgroups[@]}" 00:27:59.300 08:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:27:59.300 08:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:59.300 08:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:59.558 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:27:59.558 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:27:59.558 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:59.558 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:27:59.558 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:59.558 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:27:59.558 08:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.558 08:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:59.558 08:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.558 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:27:59.558 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:27:59.816 00:27:59.816 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:27:59.816 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:27:59.816 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:00.084 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.084 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:00.084 08:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.084 08:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:00.084 08:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.084 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:00.084 { 00:28:00.084 "cntlid": 9, 00:28:00.084 "qid": 0, 00:28:00.084 "state": "enabled", 00:28:00.084 "listen_address": { 00:28:00.084 "trtype": "TCP", 00:28:00.084 "adrfam": "IPv4", 00:28:00.084 "traddr": "10.0.0.2", 00:28:00.084 "trsvcid": "4420" 00:28:00.084 }, 00:28:00.084 "peer_address": { 00:28:00.084 "trtype": "TCP", 00:28:00.084 "adrfam": "IPv4", 00:28:00.084 "traddr": "10.0.0.1", 
00:28:00.084 "trsvcid": "56604" 00:28:00.084 }, 00:28:00.084 "auth": { 00:28:00.084 "state": "completed", 00:28:00.084 "digest": "sha256", 00:28:00.084 "dhgroup": "ffdhe2048" 00:28:00.084 } 00:28:00.084 } 00:28:00.084 ]' 00:28:00.084 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:00.084 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:00.084 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:00.342 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:00.342 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:00.342 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:00.342 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:00.342 08:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:00.600 08:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:YWI3YjQ2MThmOGM4ZjY1MTg4ZGU0NTI4ODI0MGM0MzMyYWZlZGQyNDc5NmJlOGNivkRwUg==: 00:28:01.571 08:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:01.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:01.571 08:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:01.571 08:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.571 08:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:01.571 08:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.571 08:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:01.571 08:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:01.571 08:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:01.829 08:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:28:01.829 08:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:01.829 08:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:01.829 08:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:28:01.829 08:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:01.829 08:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:28:01.829 08:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.829 08:53:56 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:28:01.829 08:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.829 08:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:28:01.829 08:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:28:02.087 00:28:02.087 08:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:02.087 08:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:02.087 08:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:02.345 08:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.345 08:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:02.345 08:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.345 08:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:02.345 08:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.345 08:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:02.345 { 00:28:02.345 "cntlid": 11, 00:28:02.345 "qid": 0, 00:28:02.345 "state": "enabled", 00:28:02.345 "listen_address": { 00:28:02.345 "trtype": "TCP", 00:28:02.345 "adrfam": "IPv4", 00:28:02.345 "traddr": "10.0.0.2", 00:28:02.345 "trsvcid": "4420" 00:28:02.345 }, 00:28:02.345 "peer_address": { 00:28:02.345 "trtype": "TCP", 00:28:02.345 "adrfam": "IPv4", 00:28:02.345 "traddr": "10.0.0.1", 00:28:02.345 "trsvcid": "56638" 00:28:02.345 }, 00:28:02.345 "auth": { 00:28:02.345 "state": "completed", 00:28:02.345 "digest": "sha256", 00:28:02.345 "dhgroup": "ffdhe2048" 00:28:02.345 } 00:28:02.345 } 00:28:02.345 ]' 00:28:02.345 08:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:02.345 08:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:02.345 08:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:02.345 08:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:02.345 08:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:02.345 08:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:02.345 08:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:02.345 08:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:02.603 08:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:N2EyM2IxNDc3MjM3ZjYyZTk5YjU0YjcxYWI1MWM4Mjl3KYtU: 00:28:03.537 08:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:03.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:03.537 08:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:03.537 08:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.537 08:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:03.537 08:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.537 08:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:03.537 08:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:03.537 08:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:04.103 08:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:28:04.103 08:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:04.103 08:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:04.103 08:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:28:04.103 08:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:04.103 08:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:28:04.103 08:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.103 08:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:04.103 08:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.103 08:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:04.103 08:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:04.360 00:28:04.360 08:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:04.360 08:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:04.360 08:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:04.618 08:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.618 08:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:28:04.618 08:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.618 08:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:04.618 08:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.618 08:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:04.618 { 00:28:04.618 "cntlid": 13, 00:28:04.618 "qid": 0, 00:28:04.618 "state": "enabled", 00:28:04.618 "listen_address": { 00:28:04.618 "trtype": "TCP", 00:28:04.618 "adrfam": "IPv4", 00:28:04.618 "traddr": "10.0.0.2", 00:28:04.618 "trsvcid": "4420" 00:28:04.618 }, 00:28:04.618 "peer_address": { 00:28:04.618 "trtype": "TCP", 00:28:04.618 "adrfam": "IPv4", 00:28:04.618 "traddr": "10.0.0.1", 00:28:04.618 "trsvcid": "56668" 00:28:04.618 }, 00:28:04.618 "auth": { 00:28:04.618 "state": "completed", 00:28:04.618 "digest": "sha256", 00:28:04.618 "dhgroup": "ffdhe2048" 00:28:04.618 } 00:28:04.618 } 00:28:04.618 ]' 00:28:04.618 08:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:04.618 08:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:04.618 08:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:04.618 08:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:04.618 08:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:04.618 08:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:04.618 08:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:04.618 08:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:04.876 08:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZTliOWIwMjlmNzE4OWMzMDVhOTQyYjA3NmE3NWI3Y2UyZmQyOTgxNGYzMTkxYjkzn7qbcw==: 00:28:05.810 08:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:05.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:05.810 08:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:05.810 08:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.810 08:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:05.810 08:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.810 08:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:05.810 08:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:05.810 08:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:06.068 08:54:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:28:06.068 08:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:06.068 08:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:06.068 08:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:28:06.068 08:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:06.068 08:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:28:06.068 08:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.068 08:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:06.068 08:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.069 08:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:06.069 08:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:06.326 00:28:06.326 08:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:06.326 08:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:06.326 08:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:06.584 08:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.584 08:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:06.584 08:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.584 08:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:06.584 08:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.584 08:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:06.584 { 00:28:06.584 "cntlid": 15, 00:28:06.584 "qid": 0, 00:28:06.584 "state": "enabled", 00:28:06.584 "listen_address": { 00:28:06.584 "trtype": "TCP", 00:28:06.584 "adrfam": "IPv4", 00:28:06.584 "traddr": "10.0.0.2", 00:28:06.584 "trsvcid": "4420" 00:28:06.584 }, 00:28:06.584 "peer_address": { 00:28:06.584 "trtype": "TCP", 00:28:06.584 "adrfam": "IPv4", 00:28:06.584 "traddr": "10.0.0.1", 00:28:06.584 "trsvcid": "56700" 00:28:06.584 }, 00:28:06.584 "auth": { 00:28:06.584 "state": "completed", 00:28:06.584 "digest": "sha256", 00:28:06.584 "dhgroup": "ffdhe2048" 00:28:06.584 } 00:28:06.584 } 00:28:06.584 ]' 00:28:06.584 08:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:06.584 08:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:06.584 08:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:06.842 08:54:01 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:06.842 08:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:06.842 08:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:06.842 08:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:06.842 08:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:07.100 08:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZmYzODNkMzJlNmZmMzlkMDkzZDM2YzJiMTQxNTI0M2FjZTE3MDFiYThjZGUxMmFlN2MzNDQ3MmFkNTNhZTYyZgA1QD8=: 00:28:08.033 08:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:08.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:08.033 08:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:08.033 08:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.033 08:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:08.033 08:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.033 08:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:28:08.033 08:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:08.033 08:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:08.033 08:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:08.290 08:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:28:08.290 08:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:08.290 08:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:08.290 08:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:08.290 08:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:08.290 08:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:28:08.290 08:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.290 08:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:08.290 08:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.290 08:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:28:08.290 08:54:02 
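The SPDK bdev path is not the only initiator exercised: after each in-process attach succeeds, the harness also authenticates with the kernel initiator by handing nvme-cli the plain-text DHHC-1 secret and then dropping the session. The round trip, using the host identity and one of the secrets visible in the trace (this assumes a kernel with NVMe/TCP DH-CHAP support):

    # Kernel-initiator leg: connect with the DH-CHAP secret, then disconnect.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
        --dhchap-secret 'DHHC-1:03:ZmYzODNkMzJlNmZmMzlkMDkzZDM2YzJiMTQxNTI0M2FjZTE3MDFiYThjZGUxMmFlN2MzNDQ3MmFkNTNhZTYyZgA1QD8=:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
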
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:28:08.548 00:28:08.548 08:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:08.548 08:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:08.548 08:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:08.806 08:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.806 08:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:08.806 08:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.806 08:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:08.806 08:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.806 08:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:08.806 { 00:28:08.806 "cntlid": 17, 00:28:08.806 "qid": 0, 00:28:08.806 "state": "enabled", 00:28:08.806 "listen_address": { 00:28:08.806 "trtype": "TCP", 00:28:08.806 "adrfam": "IPv4", 00:28:08.806 "traddr": "10.0.0.2", 00:28:08.806 "trsvcid": "4420" 00:28:08.806 }, 00:28:08.806 "peer_address": { 00:28:08.806 "trtype": "TCP", 00:28:08.806 "adrfam": "IPv4", 00:28:08.806 "traddr": "10.0.0.1", 00:28:08.806 "trsvcid": "43102" 00:28:08.806 }, 00:28:08.806 "auth": { 00:28:08.806 "state": "completed", 00:28:08.806 "digest": "sha256", 00:28:08.806 "dhgroup": "ffdhe3072" 00:28:08.806 } 00:28:08.806 } 00:28:08.806 ]' 00:28:08.806 08:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:08.806 08:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:08.806 08:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:08.806 08:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:08.806 08:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:09.063 08:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:09.063 08:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:09.063 08:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:09.319 08:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:YWI3YjQ2MThmOGM4ZjY1MTg4ZGU0NTI4ODI0MGM0MzMyYWZlZGQyNDc5NmJlOGNivkRwUg==: 00:28:10.250 08:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:10.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:10.250 08:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:10.250 08:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:10.250 08:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:10.250 08:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:10.250 08:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:10.250 08:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:10.250 08:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:10.250 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:28:10.250 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:10.250 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:10.250 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:10.250 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:10.250 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:28:10.250 08:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:10.250 08:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:10.506 08:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:10.506 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:28:10.506 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:28:10.764 00:28:10.764 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:10.764 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:10.764 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:11.021 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.021 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:11.021 08:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:11.021 08:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:11.021 08:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:11.021 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:11.021 { 
00:28:11.021 "cntlid": 19, 00:28:11.021 "qid": 0, 00:28:11.021 "state": "enabled", 00:28:11.021 "listen_address": { 00:28:11.021 "trtype": "TCP", 00:28:11.021 "adrfam": "IPv4", 00:28:11.021 "traddr": "10.0.0.2", 00:28:11.021 "trsvcid": "4420" 00:28:11.021 }, 00:28:11.021 "peer_address": { 00:28:11.021 "trtype": "TCP", 00:28:11.021 "adrfam": "IPv4", 00:28:11.021 "traddr": "10.0.0.1", 00:28:11.021 "trsvcid": "43128" 00:28:11.021 }, 00:28:11.021 "auth": { 00:28:11.021 "state": "completed", 00:28:11.021 "digest": "sha256", 00:28:11.021 "dhgroup": "ffdhe3072" 00:28:11.021 } 00:28:11.021 } 00:28:11.021 ]' 00:28:11.021 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:11.021 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:11.021 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:11.021 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:11.021 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:11.021 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:11.021 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:11.021 08:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:11.584 08:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:N2EyM2IxNDc3MjM3ZjYyZTk5YjU0YjcxYWI1MWM4Mjl3KYtU: 00:28:12.516 08:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:12.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:12.516 08:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:12.516 08:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.516 08:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:12.516 08:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.516 08:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:12.516 08:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:12.516 08:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:12.516 08:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:28:12.516 08:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:12.516 08:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:12.516 08:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:12.516 08:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:12.516 
08:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:28:12.516 08:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.516 08:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:12.516 08:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.516 08:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:12.516 08:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:13.082 00:28:13.082 08:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:13.082 08:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:13.082 08:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:13.082 08:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.082 08:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:13.082 08:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:13.082 08:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:13.082 08:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:13.082 08:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:13.082 { 00:28:13.082 "cntlid": 21, 00:28:13.082 "qid": 0, 00:28:13.082 "state": "enabled", 00:28:13.082 "listen_address": { 00:28:13.082 "trtype": "TCP", 00:28:13.082 "adrfam": "IPv4", 00:28:13.082 "traddr": "10.0.0.2", 00:28:13.082 "trsvcid": "4420" 00:28:13.082 }, 00:28:13.082 "peer_address": { 00:28:13.082 "trtype": "TCP", 00:28:13.082 "adrfam": "IPv4", 00:28:13.082 "traddr": "10.0.0.1", 00:28:13.082 "trsvcid": "43160" 00:28:13.082 }, 00:28:13.082 "auth": { 00:28:13.082 "state": "completed", 00:28:13.082 "digest": "sha256", 00:28:13.082 "dhgroup": "ffdhe3072" 00:28:13.082 } 00:28:13.082 } 00:28:13.082 ]' 00:28:13.082 08:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:13.340 08:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:13.340 08:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:13.340 08:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:13.340 08:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:13.340 08:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:13.340 08:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:13.340 08:54:07 nvmf_tcp.nvmf_auth_target -- 
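Each round also has a teardown half, visible in the detach/disconnect/remove_host triple that recurs through the trace: the host-side controller is detached, any kernel session is dropped, and the host NQN is stripped from the subsystem so the next key starts from a clean slate. In isolation (same sockets and NQNs as above):

    # Tear down one authentication round.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    "$rpc" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
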
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:13.598 08:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZTliOWIwMjlmNzE4OWMzMDVhOTQyYjA3NmE3NWI3Y2UyZmQyOTgxNGYzMTkxYjkzn7qbcw==: 00:28:14.530 08:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:14.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:14.530 08:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:14.530 08:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.530 08:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:14.530 08:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.530 08:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:14.530 08:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:14.530 08:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:14.787 08:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:28:14.787 08:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:14.787 08:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:14.787 08:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:14.787 08:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:14.787 08:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:28:14.787 08:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.787 08:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:14.787 08:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.787 08:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:14.787 08:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:15.044 00:28:15.044 08:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:15.044 08:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:15.044 08:54:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:15.302 08:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.302 08:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:15.302 08:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:15.302 08:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:15.302 08:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:15.302 08:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:15.302 { 00:28:15.302 "cntlid": 23, 00:28:15.302 "qid": 0, 00:28:15.302 "state": "enabled", 00:28:15.302 "listen_address": { 00:28:15.302 "trtype": "TCP", 00:28:15.302 "adrfam": "IPv4", 00:28:15.302 "traddr": "10.0.0.2", 00:28:15.302 "trsvcid": "4420" 00:28:15.302 }, 00:28:15.302 "peer_address": { 00:28:15.302 "trtype": "TCP", 00:28:15.302 "adrfam": "IPv4", 00:28:15.303 "traddr": "10.0.0.1", 00:28:15.303 "trsvcid": "43184" 00:28:15.303 }, 00:28:15.303 "auth": { 00:28:15.303 "state": "completed", 00:28:15.303 "digest": "sha256", 00:28:15.303 "dhgroup": "ffdhe3072" 00:28:15.303 } 00:28:15.303 } 00:28:15.303 ]' 00:28:15.303 08:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:15.303 08:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:15.303 08:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:15.303 08:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:15.303 08:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:15.560 08:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:15.560 08:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:15.560 08:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:15.833 08:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZmYzODNkMzJlNmZmMzlkMDkzZDM2YzJiMTQxNTI0M2FjZTE3MDFiYThjZGUxMmFlN2MzNDQ3MmFkNTNhZTYyZgA1QD8=: 00:28:16.812 08:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:16.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:16.812 08:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:16.812 08:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:16.812 08:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:16.812 08:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:16.812 08:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:28:16.812 08:54:11 
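That "for dhgroup" marker is the outer loop advancing: the whole trace is one pass through a nested sweep in which the host is pinned to a single digest/DH-group combination via bdev_nvme_set_options and then re-authenticated once per key. Schematically (a sketch of the loop shape as it appears in the trace, not the verbatim auth.sh source; only the groups seen in this part of the log are listed):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
        for keyid in 0 1 2 3; do
            # Restrict the host to one digest/DH-group pair for this round...
            "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
                --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            # ...then run a full add_host/attach/verify/teardown cycle.
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done
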
nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:16.812 08:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:16.812 08:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:16.812 08:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:28:16.812 08:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:16.812 08:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:16.812 08:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:16.812 08:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:16.812 08:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:28:16.812 08:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:16.812 08:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:16.812 08:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:16.813 08:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:28:16.813 08:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:28:17.378 00:28:17.378 08:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:17.378 08:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:17.378 08:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:17.637 08:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.637 08:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:17.637 08:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:17.637 08:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:17.637 08:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:17.637 08:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:17.637 { 00:28:17.637 "cntlid": 25, 00:28:17.637 "qid": 0, 00:28:17.637 "state": "enabled", 00:28:17.637 "listen_address": { 00:28:17.637 "trtype": "TCP", 00:28:17.637 "adrfam": "IPv4", 00:28:17.637 "traddr": "10.0.0.2", 00:28:17.637 "trsvcid": "4420" 00:28:17.637 }, 00:28:17.637 "peer_address": { 00:28:17.637 "trtype": "TCP", 00:28:17.637 "adrfam": "IPv4", 00:28:17.637 "traddr": "10.0.0.1", 00:28:17.637 "trsvcid": "34324" 00:28:17.637 }, 
00:28:17.637 "auth": { 00:28:17.637 "state": "completed", 00:28:17.637 "digest": "sha256", 00:28:17.637 "dhgroup": "ffdhe4096" 00:28:17.637 } 00:28:17.637 } 00:28:17.637 ]' 00:28:17.637 08:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:17.637 08:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:17.637 08:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:17.637 08:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:17.637 08:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:17.637 08:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:17.637 08:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:17.637 08:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:17.895 08:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:YWI3YjQ2MThmOGM4ZjY1MTg4ZGU0NTI4ODI0MGM0MzMyYWZlZGQyNDc5NmJlOGNivkRwUg==: 00:28:18.829 08:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:18.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:18.829 08:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:18.829 08:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:18.829 08:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:18.829 08:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:18.829 08:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:18.829 08:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:18.829 08:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:19.086 08:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:28:19.086 08:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:19.086 08:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:19.086 08:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:19.086 08:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:19.086 08:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:28:19.086 08:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:19.086 08:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
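A note on the secrets themselves: the DHHC-1:NN:<base64>: strings follow the NVMe DH-HMAC-CHAP secret representation, in which the two-digit field names the hash used for transformed secrets (00 for an untransformed secret, then 01/02/03 for SHA-256/384/512), which is presumably why the harness pairs key0 through key3 with prefixes 00 through 03. Under that reading the base64 payload carries the key material plus a trailing 4-byte CRC-32, which is easy to sanity-check against the key0 secret from the trace:

    # Peel the base64 payload out of a DHHC-1 secret and report its decoded size
    # (expected: key bytes plus a trailing 4-byte CRC-32, e.g. 52 for a 48-byte key).
    secret='DHHC-1:00:YWI3YjQ2MThmOGM4ZjY1MTg4ZGU0NTI4ODI0MGM0MzMyYWZlZGQyNDc5NmJlOGNivkRwUg==:'
    payload=${secret#DHHC-1:*:}   # strip the shortest "DHHC-1:NN:" prefix
    payload=${payload%:}          # strip the trailing ":"
    printf '%s' "$payload" | base64 -d | wc -c
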
00:28:19.086 08:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:19.086 08:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:28:19.086 08:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:28:19.344 00:28:19.344 08:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:19.344 08:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:19.344 08:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:19.601 08:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.601 08:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:19.601 08:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:19.601 08:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:19.601 08:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:19.601 08:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:19.601 { 00:28:19.601 "cntlid": 27, 00:28:19.601 "qid": 0, 00:28:19.601 "state": "enabled", 00:28:19.601 "listen_address": { 00:28:19.601 "trtype": "TCP", 00:28:19.601 "adrfam": "IPv4", 00:28:19.601 "traddr": "10.0.0.2", 00:28:19.601 "trsvcid": "4420" 00:28:19.601 }, 00:28:19.601 "peer_address": { 00:28:19.601 "trtype": "TCP", 00:28:19.601 "adrfam": "IPv4", 00:28:19.601 "traddr": "10.0.0.1", 00:28:19.601 "trsvcid": "34350" 00:28:19.601 }, 00:28:19.601 "auth": { 00:28:19.601 "state": "completed", 00:28:19.601 "digest": "sha256", 00:28:19.601 "dhgroup": "ffdhe4096" 00:28:19.601 } 00:28:19.601 } 00:28:19.601 ]' 00:28:19.601 08:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:19.601 08:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:19.601 08:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:19.858 08:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:19.858 08:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:19.858 08:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:19.858 08:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:19.858 08:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:20.116 08:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a 
--dhchap-secret DHHC-1:01:N2EyM2IxNDc3MjM3ZjYyZTk5YjU0YjcxYWI1MWM4Mjl3KYtU: 00:28:21.048 08:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:21.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:21.048 08:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:21.048 08:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.048 08:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:21.048 08:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:21.048 08:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:21.048 08:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:21.048 08:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:21.307 08:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:28:21.307 08:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:21.307 08:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:21.307 08:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:21.307 08:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:21.307 08:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:28:21.307 08:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.307 08:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:21.307 08:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:21.307 08:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:21.307 08:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:21.565 00:28:21.565 08:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:21.565 08:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:21.565 08:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:21.824 08:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.824 08:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:21.824 08:54:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.824 08:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:21.824 08:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:21.824 08:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:21.824 { 00:28:21.824 "cntlid": 29, 00:28:21.824 "qid": 0, 00:28:21.824 "state": "enabled", 00:28:21.824 "listen_address": { 00:28:21.824 "trtype": "TCP", 00:28:21.824 "adrfam": "IPv4", 00:28:21.824 "traddr": "10.0.0.2", 00:28:21.824 "trsvcid": "4420" 00:28:21.824 }, 00:28:21.824 "peer_address": { 00:28:21.824 "trtype": "TCP", 00:28:21.824 "adrfam": "IPv4", 00:28:21.824 "traddr": "10.0.0.1", 00:28:21.824 "trsvcid": "34378" 00:28:21.824 }, 00:28:21.824 "auth": { 00:28:21.824 "state": "completed", 00:28:21.824 "digest": "sha256", 00:28:21.824 "dhgroup": "ffdhe4096" 00:28:21.824 } 00:28:21.824 } 00:28:21.824 ]' 00:28:21.824 08:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:21.824 08:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:21.824 08:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:21.824 08:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:21.824 08:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:22.082 08:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:22.082 08:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:22.082 08:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:22.340 08:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZTliOWIwMjlmNzE4OWMzMDVhOTQyYjA3NmE3NWI3Y2UyZmQyOTgxNGYzMTkxYjkzn7qbcw==: 00:28:23.274 08:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:23.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:23.274 08:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:23.274 08:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:23.274 08:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:23.274 08:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:23.274 08:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:23.274 08:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:23.274 08:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:23.274 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 
ffdhe4096 3 00:28:23.274 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:23.274 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:23.274 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:23.274 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:23.274 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:28:23.274 08:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:23.274 08:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:23.274 08:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:23.274 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:23.274 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:23.840 00:28:23.840 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:23.841 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:23.841 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:24.099 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.099 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:24.099 08:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:24.099 08:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:24.099 08:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:24.099 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:24.099 { 00:28:24.099 "cntlid": 31, 00:28:24.099 "qid": 0, 00:28:24.099 "state": "enabled", 00:28:24.099 "listen_address": { 00:28:24.099 "trtype": "TCP", 00:28:24.099 "adrfam": "IPv4", 00:28:24.099 "traddr": "10.0.0.2", 00:28:24.099 "trsvcid": "4420" 00:28:24.099 }, 00:28:24.099 "peer_address": { 00:28:24.099 "trtype": "TCP", 00:28:24.099 "adrfam": "IPv4", 00:28:24.099 "traddr": "10.0.0.1", 00:28:24.099 "trsvcid": "34422" 00:28:24.099 }, 00:28:24.099 "auth": { 00:28:24.099 "state": "completed", 00:28:24.099 "digest": "sha256", 00:28:24.099 "dhgroup": "ffdhe4096" 00:28:24.099 } 00:28:24.099 } 00:28:24.099 ]' 00:28:24.099 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:24.099 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:24.099 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:24.099 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:28:24.099 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:24.099 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:24.099 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:24.099 08:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:24.357 08:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZmYzODNkMzJlNmZmMzlkMDkzZDM2YzJiMTQxNTI0M2FjZTE3MDFiYThjZGUxMmFlN2MzNDQ3MmFkNTNhZTYyZgA1QD8=: 00:28:25.291 08:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:25.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:25.291 08:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:25.291 08:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:25.291 08:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:25.291 08:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:25.291 08:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:28:25.291 08:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:25.291 08:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:25.291 08:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:25.548 08:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:28:25.549 08:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:25.549 08:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:25.549 08:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:25.549 08:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:25.549 08:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:28:25.549 08:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:25.549 08:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:25.549 08:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:25.549 08:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:28:25.549 08:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:28:26.114 00:28:26.114 08:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:26.114 08:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:26.114 08:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:26.373 08:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.373 08:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:26.373 08:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.373 08:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:26.373 08:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.373 08:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:26.373 { 00:28:26.373 "cntlid": 33, 00:28:26.373 "qid": 0, 00:28:26.373 "state": "enabled", 00:28:26.373 "listen_address": { 00:28:26.373 "trtype": "TCP", 00:28:26.373 "adrfam": "IPv4", 00:28:26.373 "traddr": "10.0.0.2", 00:28:26.373 "trsvcid": "4420" 00:28:26.373 }, 00:28:26.373 "peer_address": { 00:28:26.373 "trtype": "TCP", 00:28:26.373 "adrfam": "IPv4", 00:28:26.373 "traddr": "10.0.0.1", 00:28:26.373 "trsvcid": "34444" 00:28:26.373 }, 00:28:26.373 "auth": { 00:28:26.373 "state": "completed", 00:28:26.373 "digest": "sha256", 00:28:26.373 "dhgroup": "ffdhe6144" 00:28:26.373 } 00:28:26.373 } 00:28:26.373 ]' 00:28:26.373 08:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:26.373 08:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:26.373 08:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:26.373 08:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:26.373 08:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:26.631 08:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:26.631 08:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:26.631 08:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:26.889 08:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:YWI3YjQ2MThmOGM4ZjY1MTg4ZGU0NTI4ODI0MGM0MzMyYWZlZGQyNDc5NmJlOGNivkRwUg==: 00:28:27.823 08:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:27.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:27.823 08:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:27.823 08:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:27.823 08:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:27.823 08:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:27.823 08:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:27.823 08:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:27.823 08:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:28.081 08:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:28:28.081 08:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:28.081 08:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:28.081 08:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:28.081 08:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:28.081 08:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:28:28.081 08:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:28.081 08:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:28.081 08:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:28.081 08:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:28:28.081 08:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:28:28.647 00:28:28.647 08:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:28.647 08:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:28.647 08:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:28.904 08:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.905 08:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:28.905 08:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:28.905 08:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:28.905 08:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:28.905 08:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:28.905 { 00:28:28.905 "cntlid": 35, 00:28:28.905 "qid": 0, 
00:28:28.905 "state": "enabled", 00:28:28.905 "listen_address": { 00:28:28.905 "trtype": "TCP", 00:28:28.905 "adrfam": "IPv4", 00:28:28.905 "traddr": "10.0.0.2", 00:28:28.905 "trsvcid": "4420" 00:28:28.905 }, 00:28:28.905 "peer_address": { 00:28:28.905 "trtype": "TCP", 00:28:28.905 "adrfam": "IPv4", 00:28:28.905 "traddr": "10.0.0.1", 00:28:28.905 "trsvcid": "54090" 00:28:28.905 }, 00:28:28.905 "auth": { 00:28:28.905 "state": "completed", 00:28:28.905 "digest": "sha256", 00:28:28.905 "dhgroup": "ffdhe6144" 00:28:28.905 } 00:28:28.905 } 00:28:28.905 ]' 00:28:28.905 08:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:28.905 08:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:28.905 08:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:28.905 08:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:28.905 08:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:28.905 08:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:28.905 08:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:28.905 08:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:29.163 08:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:N2EyM2IxNDc3MjM3ZjYyZTk5YjU0YjcxYWI1MWM4Mjl3KYtU: 00:28:30.097 08:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:30.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:30.097 08:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:30.097 08:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:30.097 08:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:30.097 08:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:30.097 08:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:30.097 08:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:30.097 08:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:30.354 08:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:28:30.354 08:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:30.354 08:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:30.354 08:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:30.354 08:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:30.354 08:54:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:28:30.354 08:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:30.354 08:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:30.354 08:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:30.354 08:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:30.354 08:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:30.944 00:28:30.944 08:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:30.944 08:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:30.944 08:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:31.228 08:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.228 08:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:31.228 08:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:31.228 08:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:31.228 08:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:31.228 08:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:31.228 { 00:28:31.228 "cntlid": 37, 00:28:31.228 "qid": 0, 00:28:31.228 "state": "enabled", 00:28:31.228 "listen_address": { 00:28:31.228 "trtype": "TCP", 00:28:31.228 "adrfam": "IPv4", 00:28:31.228 "traddr": "10.0.0.2", 00:28:31.228 "trsvcid": "4420" 00:28:31.228 }, 00:28:31.228 "peer_address": { 00:28:31.228 "trtype": "TCP", 00:28:31.228 "adrfam": "IPv4", 00:28:31.228 "traddr": "10.0.0.1", 00:28:31.228 "trsvcid": "54120" 00:28:31.228 }, 00:28:31.228 "auth": { 00:28:31.228 "state": "completed", 00:28:31.228 "digest": "sha256", 00:28:31.228 "dhgroup": "ffdhe6144" 00:28:31.228 } 00:28:31.228 } 00:28:31.228 ]' 00:28:31.228 08:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:31.228 08:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:31.228 08:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:31.228 08:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:31.228 08:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:31.486 08:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:31.486 08:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:31.486 08:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:31.486 08:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZTliOWIwMjlmNzE4OWMzMDVhOTQyYjA3NmE3NWI3Y2UyZmQyOTgxNGYzMTkxYjkzn7qbcw==: 00:28:32.418 08:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:32.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:32.418 08:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:32.418 08:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.418 08:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:32.418 08:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.418 08:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:32.418 08:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:32.418 08:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:32.676 08:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:28:32.676 08:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:32.676 08:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:32.676 08:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:32.676 08:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:32.676 08:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:28:32.676 08:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.676 08:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:32.676 08:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.676 08:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:32.676 08:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:33.241 00:28:33.241 08:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:33.241 08:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:33.241 08:54:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:33.499 08:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.499 08:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:33.499 08:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:33.499 08:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:33.499 08:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:33.499 08:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:33.499 { 00:28:33.499 "cntlid": 39, 00:28:33.499 "qid": 0, 00:28:33.499 "state": "enabled", 00:28:33.499 "listen_address": { 00:28:33.499 "trtype": "TCP", 00:28:33.499 "adrfam": "IPv4", 00:28:33.499 "traddr": "10.0.0.2", 00:28:33.499 "trsvcid": "4420" 00:28:33.499 }, 00:28:33.499 "peer_address": { 00:28:33.499 "trtype": "TCP", 00:28:33.499 "adrfam": "IPv4", 00:28:33.499 "traddr": "10.0.0.1", 00:28:33.499 "trsvcid": "54148" 00:28:33.499 }, 00:28:33.499 "auth": { 00:28:33.499 "state": "completed", 00:28:33.499 "digest": "sha256", 00:28:33.499 "dhgroup": "ffdhe6144" 00:28:33.499 } 00:28:33.499 } 00:28:33.499 ]' 00:28:33.499 08:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:33.757 08:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:33.757 08:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:33.757 08:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:33.757 08:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:33.757 08:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:33.757 08:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:33.757 08:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:34.014 08:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZmYzODNkMzJlNmZmMzlkMDkzZDM2YzJiMTQxNTI0M2FjZTE3MDFiYThjZGUxMmFlN2MzNDQ3MmFkNTNhZTYyZgA1QD8=: 00:28:34.948 08:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:34.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:34.948 08:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:34.948 08:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:34.948 08:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:34.948 08:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:34.948 08:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:28:34.948 08:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # 
for keyid in "${!keys[@]}" 00:28:34.948 08:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:34.948 08:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:35.211 08:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:28:35.211 08:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:35.211 08:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:35.211 08:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:35.211 08:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:35.211 08:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:28:35.211 08:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:35.211 08:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:35.211 08:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:35.211 08:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:28:35.211 08:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:28:36.146 00:28:36.146 08:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:36.146 08:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:36.146 08:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:36.404 08:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.404 08:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:36.404 08:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:36.404 08:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:36.404 08:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:36.404 08:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:36.404 { 00:28:36.404 "cntlid": 41, 00:28:36.404 "qid": 0, 00:28:36.404 "state": "enabled", 00:28:36.404 "listen_address": { 00:28:36.404 "trtype": "TCP", 00:28:36.404 "adrfam": "IPv4", 00:28:36.404 "traddr": "10.0.0.2", 00:28:36.404 "trsvcid": "4420" 00:28:36.404 }, 00:28:36.404 "peer_address": { 00:28:36.404 "trtype": "TCP", 00:28:36.404 "adrfam": "IPv4", 00:28:36.404 "traddr": "10.0.0.1", 00:28:36.404 "trsvcid": "54184" 00:28:36.404 }, 00:28:36.404 "auth": { 00:28:36.404 "state": 
"completed", 00:28:36.404 "digest": "sha256", 00:28:36.404 "dhgroup": "ffdhe8192" 00:28:36.404 } 00:28:36.404 } 00:28:36.404 ]' 00:28:36.404 08:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:36.404 08:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:36.404 08:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:36.404 08:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:36.404 08:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:36.404 08:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:36.404 08:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:36.404 08:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:36.661 08:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:YWI3YjQ2MThmOGM4ZjY1MTg4ZGU0NTI4ODI0MGM0MzMyYWZlZGQyNDc5NmJlOGNivkRwUg==: 00:28:37.594 08:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:37.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:37.594 08:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:37.594 08:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:37.594 08:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:37.594 08:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:37.594 08:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:37.594 08:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:37.594 08:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:37.852 08:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:28:37.852 08:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:37.852 08:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:37.852 08:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:37.852 08:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:37.852 08:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:28:37.852 08:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:37.852 08:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:37.852 08:54:32 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:37.852 08:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:28:37.852 08:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:28:38.783 00:28:38.783 08:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:38.783 08:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:38.783 08:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:39.041 08:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.041 08:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:39.041 08:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:39.041 08:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:39.041 08:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:39.041 08:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:39.041 { 00:28:39.041 "cntlid": 43, 00:28:39.041 "qid": 0, 00:28:39.041 "state": "enabled", 00:28:39.041 "listen_address": { 00:28:39.041 "trtype": "TCP", 00:28:39.041 "adrfam": "IPv4", 00:28:39.041 "traddr": "10.0.0.2", 00:28:39.041 "trsvcid": "4420" 00:28:39.041 }, 00:28:39.041 "peer_address": { 00:28:39.041 "trtype": "TCP", 00:28:39.041 "adrfam": "IPv4", 00:28:39.041 "traddr": "10.0.0.1", 00:28:39.041 "trsvcid": "44338" 00:28:39.041 }, 00:28:39.041 "auth": { 00:28:39.041 "state": "completed", 00:28:39.041 "digest": "sha256", 00:28:39.041 "dhgroup": "ffdhe8192" 00:28:39.041 } 00:28:39.041 } 00:28:39.041 ]' 00:28:39.041 08:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:39.041 08:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:39.041 08:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:39.041 08:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:39.041 08:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:39.041 08:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:39.041 08:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:39.041 08:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:39.298 08:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:01:N2EyM2IxNDc3MjM3ZjYyZTk5YjU0YjcxYWI1MWM4Mjl3KYtU: 00:28:40.230 08:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:40.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:40.230 08:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:40.230 08:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.230 08:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:40.230 08:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.230 08:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:40.230 08:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:40.230 08:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:40.487 08:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:28:40.487 08:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:40.487 08:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:40.487 08:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:40.487 08:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:40.487 08:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:28:40.487 08:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.487 08:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:40.487 08:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.487 08:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:40.487 08:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:41.419 00:28:41.419 08:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:41.419 08:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:41.419 08:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:41.675 08:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.675 08:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:41.675 08:54:36 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.676 08:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:41.676 08:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.676 08:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:41.676 { 00:28:41.676 "cntlid": 45, 00:28:41.676 "qid": 0, 00:28:41.676 "state": "enabled", 00:28:41.676 "listen_address": { 00:28:41.676 "trtype": "TCP", 00:28:41.676 "adrfam": "IPv4", 00:28:41.676 "traddr": "10.0.0.2", 00:28:41.676 "trsvcid": "4420" 00:28:41.676 }, 00:28:41.676 "peer_address": { 00:28:41.676 "trtype": "TCP", 00:28:41.676 "adrfam": "IPv4", 00:28:41.676 "traddr": "10.0.0.1", 00:28:41.676 "trsvcid": "44378" 00:28:41.676 }, 00:28:41.676 "auth": { 00:28:41.676 "state": "completed", 00:28:41.676 "digest": "sha256", 00:28:41.676 "dhgroup": "ffdhe8192" 00:28:41.676 } 00:28:41.676 } 00:28:41.676 ]' 00:28:41.676 08:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:41.676 08:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:41.676 08:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:41.676 08:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:41.676 08:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:41.676 08:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:41.676 08:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:41.676 08:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:41.933 08:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZTliOWIwMjlmNzE4OWMzMDVhOTQyYjA3NmE3NWI3Y2UyZmQyOTgxNGYzMTkxYjkzn7qbcw==: 00:28:42.866 08:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:42.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:42.866 08:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:42.866 08:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:42.866 08:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:42.866 08:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:42.866 08:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:42.866 08:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:42.866 08:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:43.431 08:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:28:43.431 
08:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:43.431 08:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:43.431 08:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:43.431 08:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:43.431 08:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:28:43.431 08:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.431 08:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:43.431 08:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.431 08:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:43.431 08:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:44.364 00:28:44.364 08:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:44.364 08:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:44.364 08:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:44.621 08:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.621 08:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:44.621 08:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.621 08:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:44.621 08:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.621 08:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:44.621 { 00:28:44.621 "cntlid": 47, 00:28:44.621 "qid": 0, 00:28:44.621 "state": "enabled", 00:28:44.621 "listen_address": { 00:28:44.621 "trtype": "TCP", 00:28:44.621 "adrfam": "IPv4", 00:28:44.621 "traddr": "10.0.0.2", 00:28:44.621 "trsvcid": "4420" 00:28:44.621 }, 00:28:44.621 "peer_address": { 00:28:44.621 "trtype": "TCP", 00:28:44.621 "adrfam": "IPv4", 00:28:44.621 "traddr": "10.0.0.1", 00:28:44.621 "trsvcid": "44400" 00:28:44.621 }, 00:28:44.621 "auth": { 00:28:44.621 "state": "completed", 00:28:44.621 "digest": "sha256", 00:28:44.621 "dhgroup": "ffdhe8192" 00:28:44.621 } 00:28:44.621 } 00:28:44.621 ]' 00:28:44.621 08:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:44.621 08:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:44.621 08:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:44.621 08:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:44.621 
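
The cycle that this log repeats for every (digest, dhgroup, key) combination can be reconstructed from the RPC calls above. The following is a minimal bash sketch of one iteration, not the test script itself; the target-side RPC socket, the host NQN/hostid, and the DHHC-1 secret are placeholders for values that are elided or not visible in this excerpt, and all command names and flags are taken verbatim from the log.

    #!/usr/bin/env bash
    # One connect/authenticate iteration, reconstructed from the surrounding log.
    TGT_RPC="scripts/rpc.py"                         # target-side RPC; socket assumed (not shown here)
    HOST_RPC="scripts/rpc.py -s /var/tmp/host.sock"  # host-side bdev_nvme instance, as in the log
    SUBNQN="nqn.2024-03.io.spdk:cnode0"
    HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:PLACEHOLDER"   # elided in this sketch
    HOSTID="PLACEHOLDER"                                    # the log passes the same uuid as --hostid
    SECRET="DHHC-1:01:PLACEHOLDER:"    # in this log the DHHC-1:0N: prefix tracks keyN (key0 -> 00, ...)
    digest=sha256 dhgroup=ffdhe8192 key=key3                # the combination under test at this point

    # 1. Pin the host initiator to a single digest/dhgroup pair.
    $HOST_RPC bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # 2. Register the host on the subsystem with the DH-HMAC-CHAP key under test.
    $TGT_RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "$key"
    # 3. Attach a controller over TCP; this is where the authentication handshake runs.
    $HOST_RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "$key"
    # 4. Confirm the controller came up and the qpair negotiated what was requested.
    [[ "$($HOST_RPC bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    $TGT_RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect "completed"
    # 5. Tear down, then repeat the handshake once more with the kernel initiator.
    $HOST_RPC bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid "$HOSTID" --dhchap-secret "$SECRET"
    nvme disconnect -n "$SUBNQN"
    $TGT_RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

In this excerpt the iteration runs once per key (key0 through key3) under sha256 with dhgroups ffdhe6144 and ffdhe8192, and then again under sha384 beginning with the null dhgroup and ffdhe2048.
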
08:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:44.621 08:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:44.621 08:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:44.621 08:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:44.878 08:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZmYzODNkMzJlNmZmMzlkMDkzZDM2YzJiMTQxNTI0M2FjZTE3MDFiYThjZGUxMmFlN2MzNDQ3MmFkNTNhZTYyZgA1QD8=: 00:28:45.810 08:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:45.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:45.810 08:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:45.810 08:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.810 08:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:45.810 08:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.810 08:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:28:45.810 08:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:28:45.810 08:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:45.810 08:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:28:45.810 08:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:28:46.067 08:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:28:46.067 08:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:46.067 08:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:46.067 08:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:28:46.067 08:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:46.067 08:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:28:46.067 08:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.067 08:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:46.067 08:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.067 08:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:28:46.067 08:54:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:28:46.348 00:28:46.348 08:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:46.348 08:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:46.348 08:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:46.650 08:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.650 08:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:46.650 08:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.650 08:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:46.650 08:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.650 08:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:46.650 { 00:28:46.650 "cntlid": 49, 00:28:46.650 "qid": 0, 00:28:46.650 "state": "enabled", 00:28:46.650 "listen_address": { 00:28:46.650 "trtype": "TCP", 00:28:46.650 "adrfam": "IPv4", 00:28:46.650 "traddr": "10.0.0.2", 00:28:46.650 "trsvcid": "4420" 00:28:46.650 }, 00:28:46.650 "peer_address": { 00:28:46.650 "trtype": "TCP", 00:28:46.650 "adrfam": "IPv4", 00:28:46.650 "traddr": "10.0.0.1", 00:28:46.650 "trsvcid": "44430" 00:28:46.650 }, 00:28:46.650 "auth": { 00:28:46.650 "state": "completed", 00:28:46.650 "digest": "sha384", 00:28:46.650 "dhgroup": "null" 00:28:46.650 } 00:28:46.650 } 00:28:46.650 ]' 00:28:46.650 08:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:46.650 08:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:46.650 08:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:46.650 08:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:28:46.650 08:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:46.650 08:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:46.650 08:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:46.650 08:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:46.908 08:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:YWI3YjQ2MThmOGM4ZjY1MTg4ZGU0NTI4ODI0MGM0MzMyYWZlZGQyNDc5NmJlOGNivkRwUg==: 00:28:47.839 08:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:47.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:47.839 08:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:47.839 08:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:47.839 08:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:47.839 08:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:47.839 08:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:47.840 08:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:28:47.840 08:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:28:48.097 08:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:28:48.097 08:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:48.097 08:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:48.097 08:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:28:48.097 08:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:48.097 08:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:28:48.097 08:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:48.097 08:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:48.097 08:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:48.097 08:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:28:48.097 08:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:28:48.663 00:28:48.663 08:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:48.663 08:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:48.663 08:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:48.921 08:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.921 08:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:48.921 08:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:48.921 08:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:48.921 08:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:48.921 08:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:48.921 { 00:28:48.921 "cntlid": 51, 00:28:48.921 "qid": 
0, 00:28:48.921 "state": "enabled", 00:28:48.921 "listen_address": { 00:28:48.921 "trtype": "TCP", 00:28:48.921 "adrfam": "IPv4", 00:28:48.921 "traddr": "10.0.0.2", 00:28:48.921 "trsvcid": "4420" 00:28:48.921 }, 00:28:48.921 "peer_address": { 00:28:48.921 "trtype": "TCP", 00:28:48.921 "adrfam": "IPv4", 00:28:48.921 "traddr": "10.0.0.1", 00:28:48.921 "trsvcid": "41612" 00:28:48.921 }, 00:28:48.921 "auth": { 00:28:48.921 "state": "completed", 00:28:48.921 "digest": "sha384", 00:28:48.921 "dhgroup": "null" 00:28:48.921 } 00:28:48.921 } 00:28:48.921 ]' 00:28:48.921 08:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:48.921 08:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:48.921 08:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:48.921 08:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:28:48.921 08:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:48.921 08:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:48.921 08:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:48.921 08:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:49.179 08:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:N2EyM2IxNDc3MjM3ZjYyZTk5YjU0YjcxYWI1MWM4Mjl3KYtU: 00:28:50.115 08:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:50.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:50.115 08:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:50.115 08:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:50.115 08:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:50.115 08:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:50.115 08:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:50.115 08:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:28:50.115 08:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:28:50.372 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:28:50.372 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:50.372 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:50.372 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:28:50.372 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:50.372 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:28:50.372 08:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:50.372 08:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:50.372 08:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:50.372 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:50.372 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:50.629 00:28:50.629 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:50.629 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:50.629 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:50.887 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.887 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:50.887 08:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:50.887 08:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:50.887 08:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:50.887 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:50.887 { 00:28:50.887 "cntlid": 53, 00:28:50.887 "qid": 0, 00:28:50.887 "state": "enabled", 00:28:50.887 "listen_address": { 00:28:50.887 "trtype": "TCP", 00:28:50.887 "adrfam": "IPv4", 00:28:50.887 "traddr": "10.0.0.2", 00:28:50.887 "trsvcid": "4420" 00:28:50.887 }, 00:28:50.887 "peer_address": { 00:28:50.887 "trtype": "TCP", 00:28:50.887 "adrfam": "IPv4", 00:28:50.887 "traddr": "10.0.0.1", 00:28:50.887 "trsvcid": "41660" 00:28:50.887 }, 00:28:50.887 "auth": { 00:28:50.887 "state": "completed", 00:28:50.887 "digest": "sha384", 00:28:50.887 "dhgroup": "null" 00:28:50.887 } 00:28:50.887 } 00:28:50.887 ]' 00:28:50.887 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:50.887 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:50.887 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:50.887 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:28:50.887 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:51.144 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:51.144 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:51.144 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:51.402 08:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZTliOWIwMjlmNzE4OWMzMDVhOTQyYjA3NmE3NWI3Y2UyZmQyOTgxNGYzMTkxYjkzn7qbcw==: 00:28:52.334 08:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:52.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:52.334 08:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:52.334 08:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.334 08:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:52.334 08:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.334 08:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:52.334 08:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:28:52.334 08:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:28:52.592 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:28:52.592 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:52.592 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:52.592 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:28:52.592 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:52.592 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:28:52.592 08:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.592 08:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:52.592 08:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.592 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:52.592 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:52.850 00:28:52.850 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:52.850 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:52.850 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:28:53.107 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.107 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:53.107 08:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:53.107 08:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:53.107 08:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.107 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:53.107 { 00:28:53.107 "cntlid": 55, 00:28:53.107 "qid": 0, 00:28:53.107 "state": "enabled", 00:28:53.107 "listen_address": { 00:28:53.107 "trtype": "TCP", 00:28:53.107 "adrfam": "IPv4", 00:28:53.107 "traddr": "10.0.0.2", 00:28:53.107 "trsvcid": "4420" 00:28:53.107 }, 00:28:53.107 "peer_address": { 00:28:53.107 "trtype": "TCP", 00:28:53.107 "adrfam": "IPv4", 00:28:53.107 "traddr": "10.0.0.1", 00:28:53.107 "trsvcid": "41682" 00:28:53.107 }, 00:28:53.107 "auth": { 00:28:53.107 "state": "completed", 00:28:53.107 "digest": "sha384", 00:28:53.107 "dhgroup": "null" 00:28:53.107 } 00:28:53.107 } 00:28:53.107 ]' 00:28:53.107 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:53.107 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:53.107 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:53.107 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:28:53.107 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:53.107 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:53.107 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:53.107 08:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:53.365 08:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZmYzODNkMzJlNmZmMzlkMDkzZDM2YzJiMTQxNTI0M2FjZTE3MDFiYThjZGUxMmFlN2MzNDQ3MmFkNTNhZTYyZgA1QD8=: 00:28:54.737 08:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:54.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:54.737 08:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:54.737 08:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.737 08:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:54.737 08:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.737 08:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:28:54.737 08:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:54.737 08:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:54.737 08:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:54.737 08:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:28:54.737 08:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:54.737 08:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:54.737 08:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:28:54.737 08:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:54.737 08:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:28:54.737 08:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.737 08:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:54.737 08:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.737 08:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:28:54.737 08:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:28:54.994 00:28:54.994 08:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:54.994 08:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:54.994 08:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:55.252 08:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.252 08:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:55.252 08:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.252 08:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:55.252 08:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.252 08:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:55.252 { 00:28:55.252 "cntlid": 57, 00:28:55.252 "qid": 0, 00:28:55.252 "state": "enabled", 00:28:55.252 "listen_address": { 00:28:55.252 "trtype": "TCP", 00:28:55.252 "adrfam": "IPv4", 00:28:55.252 "traddr": "10.0.0.2", 00:28:55.252 "trsvcid": "4420" 00:28:55.252 }, 00:28:55.252 "peer_address": { 00:28:55.252 "trtype": "TCP", 00:28:55.252 "adrfam": "IPv4", 00:28:55.252 "traddr": "10.0.0.1", 00:28:55.252 "trsvcid": "41700" 00:28:55.252 }, 00:28:55.252 "auth": { 00:28:55.252 "state": "completed", 00:28:55.252 "digest": "sha384", 00:28:55.252 "dhgroup": "ffdhe2048" 00:28:55.252 } 00:28:55.252 } 
00:28:55.253 ]' 00:28:55.253 08:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:55.253 08:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:55.253 08:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:55.510 08:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:55.510 08:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:55.510 08:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:55.510 08:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:55.510 08:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:55.768 08:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:YWI3YjQ2MThmOGM4ZjY1MTg4ZGU0NTI4ODI0MGM0MzMyYWZlZGQyNDc5NmJlOGNivkRwUg==: 00:28:56.701 08:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:56.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:56.701 08:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:56.701 08:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:56.701 08:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:56.701 08:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:56.701 08:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:56.701 08:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:56.701 08:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:56.958 08:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:28:56.958 08:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:56.958 08:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:56.958 08:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:28:56.958 08:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:56.958 08:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:28:56.958 08:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:56.958 08:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:56.958 08:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:56.958 08:54:51 nvmf_tcp.nvmf_auth_target -- 
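The round in progress here condenses to three RPC calls, two against the host-side SPDK instance (socket /var/tmp/host.sock) and one against the target. A minimal sketch of the sequence, assuming both instances are up and a key named key1 was registered on each side earlier in auth.sh; the $rpc and $hostnqn variables below are illustrative shorthand, not names from the script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

    # host side: restrict the initiator to one digest/dhgroup combination
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

    # target side: admit the host on the subsystem, bound to DH-CHAP key "key1"
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key1

    # host side: attach; the controller only appears if DH-HMAC-CHAP completes
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1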
target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:28:56.958 08:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:28:57.215 00:28:57.215 08:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:57.215 08:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:57.216 08:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:57.473 08:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.473 08:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:57.473 08:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.473 08:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:57.473 08:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.473 08:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:57.473 { 00:28:57.473 "cntlid": 59, 00:28:57.473 "qid": 0, 00:28:57.473 "state": "enabled", 00:28:57.473 "listen_address": { 00:28:57.473 "trtype": "TCP", 00:28:57.473 "adrfam": "IPv4", 00:28:57.473 "traddr": "10.0.0.2", 00:28:57.473 "trsvcid": "4420" 00:28:57.473 }, 00:28:57.473 "peer_address": { 00:28:57.473 "trtype": "TCP", 00:28:57.473 "adrfam": "IPv4", 00:28:57.473 "traddr": "10.0.0.1", 00:28:57.473 "trsvcid": "57680" 00:28:57.473 }, 00:28:57.473 "auth": { 00:28:57.473 "state": "completed", 00:28:57.473 "digest": "sha384", 00:28:57.473 "dhgroup": "ffdhe2048" 00:28:57.473 } 00:28:57.473 } 00:28:57.473 ]' 00:28:57.473 08:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:57.473 08:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:57.473 08:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:57.731 08:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:57.731 08:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:57.731 08:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:57.731 08:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:57.731 08:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:57.989 08:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:N2EyM2IxNDc3MjM3ZjYyZTk5YjU0YjcxYWI1MWM4Mjl3KYtU: 00:28:58.924 08:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:58.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:58.924 08:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:58.924 08:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:58.924 08:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:58.924 08:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:58.925 08:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:28:58.925 08:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:58.925 08:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:58.925 08:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:28:58.925 08:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:28:58.925 08:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:58.925 08:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:28:58.925 08:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:58.925 08:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:28:58.925 08:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:58.925 08:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:59.182 08:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:59.182 08:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:59.182 08:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:59.440 00:28:59.440 08:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:28:59.440 08:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:28:59.440 08:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:59.698 08:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.698 08:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:59.698 08:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:59.698 08:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:28:59.698 08:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:59.698 08:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:28:59.698 { 00:28:59.698 "cntlid": 61, 00:28:59.698 "qid": 0, 00:28:59.698 "state": "enabled", 00:28:59.698 "listen_address": { 00:28:59.698 "trtype": "TCP", 00:28:59.698 "adrfam": "IPv4", 00:28:59.698 "traddr": "10.0.0.2", 00:28:59.698 "trsvcid": "4420" 00:28:59.698 }, 00:28:59.698 "peer_address": { 00:28:59.698 "trtype": "TCP", 00:28:59.698 "adrfam": "IPv4", 00:28:59.698 "traddr": "10.0.0.1", 00:28:59.698 "trsvcid": "57716" 00:28:59.698 }, 00:28:59.698 "auth": { 00:28:59.698 "state": "completed", 00:28:59.698 "digest": "sha384", 00:28:59.698 "dhgroup": "ffdhe2048" 00:28:59.698 } 00:28:59.698 } 00:28:59.698 ]' 00:28:59.698 08:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:28:59.698 08:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:59.698 08:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:28:59.698 08:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:59.698 08:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:28:59.698 08:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:59.698 08:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:59.698 08:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:59.955 08:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZTliOWIwMjlmNzE4OWMzMDVhOTQyYjA3NmE3NWI3Y2UyZmQyOTgxNGYzMTkxYjkzn7qbcw==: 00:29:00.888 08:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:00.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:00.888 08:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:00.888 08:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:00.888 08:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:00.888 08:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:00.888 08:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:00.888 08:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:00.888 08:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:01.146 08:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:29:01.146 08:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:01.146 08:54:55 nvmf_tcp.nvmf_auth_target -- 
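The nvme connect calls interleaved above exercise the same keys from the Linux kernel initiator, passing the secret inline. In the DHHC-1:NN: notation the two digits indicate how the base64 payload was transformed (00 unhashed, 01/02/03 hashed with SHA-256/384/512), which is why the key3 strings in this log are the longest. A sketch of the kernel-side round, with the secret copied verbatim from the trace:

    # connect through the kernel initiator with an inline DH-CHAP secret
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
        --dhchap-secret 'DHHC-1:02:ZTliOWIwMjlmNzE4OWMzMDVhOTQyYjA3NmE3NWI3Y2UyZmQyOTgxNGYzMTkxYjkzn7qbcw==:'
    # secrets of this form can be produced with nvme-cli's gen-dhchap-key
    # (availability and flag spelling vary by nvme-cli version)
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0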
target/auth.sh@36 -- # digest=sha384 00:29:01.146 08:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:29:01.146 08:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:01.146 08:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:29:01.146 08:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.146 08:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:01.146 08:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.146 08:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:01.146 08:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:01.416 00:29:01.416 08:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:01.416 08:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:01.416 08:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:01.701 08:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.701 08:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:01.701 08:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.701 08:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:01.701 08:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.701 08:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:01.701 { 00:29:01.701 "cntlid": 63, 00:29:01.701 "qid": 0, 00:29:01.701 "state": "enabled", 00:29:01.701 "listen_address": { 00:29:01.701 "trtype": "TCP", 00:29:01.701 "adrfam": "IPv4", 00:29:01.702 "traddr": "10.0.0.2", 00:29:01.702 "trsvcid": "4420" 00:29:01.702 }, 00:29:01.702 "peer_address": { 00:29:01.702 "trtype": "TCP", 00:29:01.702 "adrfam": "IPv4", 00:29:01.702 "traddr": "10.0.0.1", 00:29:01.702 "trsvcid": "57738" 00:29:01.702 }, 00:29:01.702 "auth": { 00:29:01.702 "state": "completed", 00:29:01.702 "digest": "sha384", 00:29:01.702 "dhgroup": "ffdhe2048" 00:29:01.702 } 00:29:01.702 } 00:29:01.702 ]' 00:29:01.702 08:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:01.702 08:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:01.702 08:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:01.959 08:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:01.959 08:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:01.959 08:54:56 nvmf_tcp.nvmf_auth_target -- 
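The assertions running around this point reduce to three jq probes against nvmf_subsystem_get_qpairs, checking that the accepted qpair really negotiated the expected parameters. Condensed sketch; $rpc is shorthand for the rpc.py path used throughout this log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # target side: fetch the qpair list for the subsystem under test
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]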
target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:01.959 08:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:01.959 08:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:02.217 08:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZmYzODNkMzJlNmZmMzlkMDkzZDM2YzJiMTQxNTI0M2FjZTE3MDFiYThjZGUxMmFlN2MzNDQ3MmFkNTNhZTYyZgA1QD8=: 00:29:03.150 08:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:03.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:03.150 08:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:03.150 08:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.150 08:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:03.150 08:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.150 08:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:29:03.150 08:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:03.150 08:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:03.150 08:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:03.408 08:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:29:03.408 08:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:03.408 08:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:03.408 08:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:29:03.408 08:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:03.408 08:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:29:03.408 08:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.408 08:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:03.408 08:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.408 08:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:29:03.408 08:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:29:03.666 00:29:03.666 08:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:03.666 08:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:03.666 08:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:03.923 08:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.923 08:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:03.923 08:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.923 08:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:03.924 08:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.924 08:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:03.924 { 00:29:03.924 "cntlid": 65, 00:29:03.924 "qid": 0, 00:29:03.924 "state": "enabled", 00:29:03.924 "listen_address": { 00:29:03.924 "trtype": "TCP", 00:29:03.924 "adrfam": "IPv4", 00:29:03.924 "traddr": "10.0.0.2", 00:29:03.924 "trsvcid": "4420" 00:29:03.924 }, 00:29:03.924 "peer_address": { 00:29:03.924 "trtype": "TCP", 00:29:03.924 "adrfam": "IPv4", 00:29:03.924 "traddr": "10.0.0.1", 00:29:03.924 "trsvcid": "57778" 00:29:03.924 }, 00:29:03.924 "auth": { 00:29:03.924 "state": "completed", 00:29:03.924 "digest": "sha384", 00:29:03.924 "dhgroup": "ffdhe3072" 00:29:03.924 } 00:29:03.924 } 00:29:03.924 ]' 00:29:03.924 08:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:03.924 08:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:03.924 08:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:03.924 08:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:03.924 08:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:03.924 08:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:03.924 08:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:03.924 08:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:04.181 08:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:YWI3YjQ2MThmOGM4ZjY1MTg4ZGU0NTI4ODI0MGM0MzMyYWZlZGQyNDc5NmJlOGNivkRwUg==: 00:29:05.115 08:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:05.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:05.115 08:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:05.115 08:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.115 
08:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:05.115 08:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.115 08:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:05.115 08:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:05.115 08:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:05.372 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:29:05.372 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:05.372 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:05.372 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:29:05.372 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:05.372 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:29:05.372 08:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.372 08:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:05.372 08:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.372 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:29:05.372 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:29:05.938 00:29:05.938 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:05.938 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:05.938 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:05.938 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.938 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:05.938 08:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.938 08:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:06.196 08:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.196 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:06.196 { 00:29:06.196 "cntlid": 67, 00:29:06.196 "qid": 0, 00:29:06.196 "state": "enabled", 00:29:06.196 "listen_address": { 00:29:06.196 "trtype": "TCP", 00:29:06.196 "adrfam": "IPv4", 00:29:06.196 "traddr": "10.0.0.2", 00:29:06.196 "trsvcid": 
"4420" 00:29:06.196 }, 00:29:06.196 "peer_address": { 00:29:06.196 "trtype": "TCP", 00:29:06.196 "adrfam": "IPv4", 00:29:06.196 "traddr": "10.0.0.1", 00:29:06.196 "trsvcid": "57814" 00:29:06.196 }, 00:29:06.196 "auth": { 00:29:06.196 "state": "completed", 00:29:06.196 "digest": "sha384", 00:29:06.196 "dhgroup": "ffdhe3072" 00:29:06.196 } 00:29:06.196 } 00:29:06.196 ]' 00:29:06.196 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:06.196 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:06.196 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:06.196 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:06.196 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:06.196 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:06.196 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:06.196 08:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:06.454 08:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:N2EyM2IxNDc3MjM3ZjYyZTk5YjU0YjcxYWI1MWM4Mjl3KYtU: 00:29:07.386 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:07.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:07.386 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:07.386 08:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.386 08:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:07.386 08:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.386 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:07.386 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:07.386 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:07.644 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:29:07.644 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:07.644 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:07.644 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:29:07.644 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:29:07.644 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:29:07.644 08:55:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.644 08:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:07.644 08:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.644 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:29:07.644 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:29:07.900 00:29:07.900 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:07.900 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:07.900 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:08.157 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.157 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:08.157 08:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.157 08:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:08.157 08:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.157 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:08.157 { 00:29:08.157 "cntlid": 69, 00:29:08.157 "qid": 0, 00:29:08.157 "state": "enabled", 00:29:08.157 "listen_address": { 00:29:08.157 "trtype": "TCP", 00:29:08.157 "adrfam": "IPv4", 00:29:08.157 "traddr": "10.0.0.2", 00:29:08.157 "trsvcid": "4420" 00:29:08.157 }, 00:29:08.157 "peer_address": { 00:29:08.157 "trtype": "TCP", 00:29:08.157 "adrfam": "IPv4", 00:29:08.157 "traddr": "10.0.0.1", 00:29:08.157 "trsvcid": "45394" 00:29:08.157 }, 00:29:08.157 "auth": { 00:29:08.157 "state": "completed", 00:29:08.157 "digest": "sha384", 00:29:08.157 "dhgroup": "ffdhe3072" 00:29:08.157 } 00:29:08.157 } 00:29:08.157 ]' 00:29:08.157 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:08.415 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:08.415 08:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:08.415 08:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:08.415 08:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:08.415 08:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:08.415 08:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:08.415 08:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:08.672 08:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZTliOWIwMjlmNzE4OWMzMDVhOTQyYjA3NmE3NWI3Y2UyZmQyOTgxNGYzMTkxYjkzn7qbcw==: 00:29:09.605 08:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:09.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:09.605 08:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:09.605 08:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.605 08:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:09.605 08:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.605 08:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:09.605 08:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:09.605 08:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:09.862 08:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:29:09.862 08:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:09.862 08:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:09.862 08:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:29:09.862 08:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:09.862 08:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:29:09.862 08:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.862 08:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:09.862 08:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.862 08:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:09.862 08:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:10.119 00:29:10.119 08:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:10.119 08:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:10.119 08:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:10.376 08:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:29:10.376 08:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:10.376 08:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:10.376 08:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:10.633 08:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:10.633 08:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:10.633 { 00:29:10.633 "cntlid": 71, 00:29:10.633 "qid": 0, 00:29:10.633 "state": "enabled", 00:29:10.633 "listen_address": { 00:29:10.633 "trtype": "TCP", 00:29:10.633 "adrfam": "IPv4", 00:29:10.633 "traddr": "10.0.0.2", 00:29:10.633 "trsvcid": "4420" 00:29:10.633 }, 00:29:10.633 "peer_address": { 00:29:10.633 "trtype": "TCP", 00:29:10.633 "adrfam": "IPv4", 00:29:10.633 "traddr": "10.0.0.1", 00:29:10.633 "trsvcid": "45422" 00:29:10.633 }, 00:29:10.633 "auth": { 00:29:10.633 "state": "completed", 00:29:10.633 "digest": "sha384", 00:29:10.633 "dhgroup": "ffdhe3072" 00:29:10.633 } 00:29:10.633 } 00:29:10.633 ]' 00:29:10.633 08:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:10.633 08:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:10.633 08:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:10.633 08:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:10.633 08:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:10.633 08:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:10.633 08:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:10.633 08:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:10.890 08:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZmYzODNkMzJlNmZmMzlkMDkzZDM2YzJiMTQxNTI0M2FjZTE3MDFiYThjZGUxMmFlN2MzNDQ3MmFkNTNhZTYyZgA1QD8=: 00:29:11.824 08:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:11.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:11.824 08:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:11.824 08:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.824 08:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:11.824 08:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.824 08:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:29:11.824 08:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:11.824 08:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:11.824 08:55:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:12.081 08:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:29:12.081 08:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:12.081 08:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:12.081 08:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:29:12.081 08:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:12.081 08:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:29:12.081 08:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:12.081 08:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:12.081 08:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:12.081 08:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:29:12.081 08:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:29:12.339 00:29:12.339 08:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:12.339 08:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:12.339 08:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:12.597 08:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.597 08:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:12.597 08:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:12.597 08:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:12.597 08:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:12.597 08:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:12.597 { 00:29:12.597 "cntlid": 73, 00:29:12.597 "qid": 0, 00:29:12.597 "state": "enabled", 00:29:12.597 "listen_address": { 00:29:12.597 "trtype": "TCP", 00:29:12.597 "adrfam": "IPv4", 00:29:12.597 "traddr": "10.0.0.2", 00:29:12.597 "trsvcid": "4420" 00:29:12.597 }, 00:29:12.597 "peer_address": { 00:29:12.597 "trtype": "TCP", 00:29:12.597 "adrfam": "IPv4", 00:29:12.597 "traddr": "10.0.0.1", 00:29:12.597 "trsvcid": "45442" 00:29:12.597 }, 00:29:12.597 "auth": { 00:29:12.597 "state": "completed", 00:29:12.597 "digest": "sha384", 00:29:12.597 "dhgroup": "ffdhe4096" 00:29:12.597 } 00:29:12.597 } 00:29:12.597 ]' 00:29:12.597 08:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r 
'.[0].auth.digest' 00:29:12.854 08:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:12.854 08:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:12.854 08:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:12.854 08:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:12.854 08:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:12.854 08:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:12.854 08:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:13.112 08:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:YWI3YjQ2MThmOGM4ZjY1MTg4ZGU0NTI4ODI0MGM0MzMyYWZlZGQyNDc5NmJlOGNivkRwUg==: 00:29:14.045 08:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:14.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:14.045 08:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:14.045 08:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:14.045 08:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:14.045 08:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:14.045 08:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:14.045 08:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:14.045 08:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:14.302 08:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:29:14.302 08:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:14.302 08:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:14.302 08:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:29:14.302 08:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:14.302 08:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:29:14.302 08:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:14.302 08:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:14.302 08:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:14.302 08:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:29:14.302 08:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:29:14.560 00:29:14.560 08:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:14.560 08:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:14.560 08:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:14.818 08:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.818 08:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:14.818 08:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:14.818 08:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:14.818 08:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:14.818 08:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:14.818 { 00:29:14.818 "cntlid": 75, 00:29:14.818 "qid": 0, 00:29:14.818 "state": "enabled", 00:29:14.818 "listen_address": { 00:29:14.818 "trtype": "TCP", 00:29:14.818 "adrfam": "IPv4", 00:29:14.818 "traddr": "10.0.0.2", 00:29:14.818 "trsvcid": "4420" 00:29:14.818 }, 00:29:14.818 "peer_address": { 00:29:14.818 "trtype": "TCP", 00:29:14.818 "adrfam": "IPv4", 00:29:14.818 "traddr": "10.0.0.1", 00:29:14.818 "trsvcid": "45464" 00:29:14.818 }, 00:29:14.818 "auth": { 00:29:14.818 "state": "completed", 00:29:14.818 "digest": "sha384", 00:29:14.818 "dhgroup": "ffdhe4096" 00:29:14.818 } 00:29:14.818 } 00:29:14.818 ]' 00:29:14.818 08:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:15.075 08:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:15.075 08:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:15.075 08:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:15.075 08:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:15.075 08:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:15.075 08:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:15.075 08:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:15.333 08:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:N2EyM2IxNDc3MjM3ZjYyZTk5YjU0YjcxYWI1MWM4Mjl3KYtU: 00:29:16.279 08:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:16.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:29:16.279 08:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:16.279 08:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.279 08:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:16.279 08:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.279 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:16.279 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:16.279 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:16.555 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:29:16.555 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:16.555 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:16.555 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:29:16.555 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:29:16.555 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:29:16.555 08:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.555 08:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:16.555 08:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.555 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:29:16.555 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:29:16.874 00:29:16.874 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:16.874 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:16.874 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:17.130 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.130 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:17.130 08:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.130 08:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:17.130 08:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
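Each verified round ends with the same teardown before the next key is tried. The recurring sequence, condensed from the trace ($rpc and $hostnqn are illustrative shorthand):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    # host side: drop the authenticated SPDK controller
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # kernel side: drop the nvme-cli connection made with --dhchap-secret
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # target side: revoke the host entry so the next round starts from a clean slate
    $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"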
00:29:17.130 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:17.130 { 00:29:17.130 "cntlid": 77, 00:29:17.130 "qid": 0, 00:29:17.130 "state": "enabled", 00:29:17.130 "listen_address": { 00:29:17.130 "trtype": "TCP", 00:29:17.130 "adrfam": "IPv4", 00:29:17.130 "traddr": "10.0.0.2", 00:29:17.130 "trsvcid": "4420" 00:29:17.130 }, 00:29:17.130 "peer_address": { 00:29:17.130 "trtype": "TCP", 00:29:17.130 "adrfam": "IPv4", 00:29:17.130 "traddr": "10.0.0.1", 00:29:17.130 "trsvcid": "42224" 00:29:17.130 }, 00:29:17.130 "auth": { 00:29:17.130 "state": "completed", 00:29:17.130 "digest": "sha384", 00:29:17.130 "dhgroup": "ffdhe4096" 00:29:17.130 } 00:29:17.130 } 00:29:17.130 ]' 00:29:17.130 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:17.386 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:17.386 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:17.386 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:17.386 08:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:17.386 08:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:17.386 08:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:17.386 08:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:17.642 08:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZTliOWIwMjlmNzE4OWMzMDVhOTQyYjA3NmE3NWI3Y2UyZmQyOTgxNGYzMTkxYjkzn7qbcw==: 00:29:18.573 08:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:18.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:18.573 08:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:18.573 08:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.573 08:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:18.573 08:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.573 08:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:18.573 08:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:18.573 08:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:18.831 08:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:29:18.831 08:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:18.831 08:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:18.831 08:55:13 nvmf_tcp.nvmf_auth_target -- 
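The repetition across this section is driven by two nested loops in auth.sh, visible as the @85/@86 xtrace markers. A sketch reconstructed from the trace, not the verbatim script; the keys array and the hostrpc helper are defined earlier in auth.sh:

    hostrpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }     # per target/auth.sh@31
    digest=sha384                                                # this part of the run
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)      # groups seen in this run
    for dhgroup in "${dhgroups[@]}"; do                          # target/auth.sh@85
        for keyid in "${!keys[@]}"; do                           # target/auth.sh@86, keys 0..3
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                --dhchap-dhgroups "$dhgroup"                     # target/auth.sh@87
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # target/auth.sh@89
        done
    done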
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:29:18.831 08:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:18.831 08:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:29:18.831 08:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.831 08:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:18.831 08:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.831 08:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:18.831 08:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:19.088 00:29:19.088 08:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:19.088 08:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:19.088 08:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:19.345 08:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.345 08:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:19.345 08:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.345 08:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:19.345 08:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.345 08:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:19.345 { 00:29:19.345 "cntlid": 79, 00:29:19.345 "qid": 0, 00:29:19.345 "state": "enabled", 00:29:19.345 "listen_address": { 00:29:19.345 "trtype": "TCP", 00:29:19.345 "adrfam": "IPv4", 00:29:19.345 "traddr": "10.0.0.2", 00:29:19.345 "trsvcid": "4420" 00:29:19.345 }, 00:29:19.345 "peer_address": { 00:29:19.345 "trtype": "TCP", 00:29:19.345 "adrfam": "IPv4", 00:29:19.345 "traddr": "10.0.0.1", 00:29:19.345 "trsvcid": "42240" 00:29:19.345 }, 00:29:19.345 "auth": { 00:29:19.345 "state": "completed", 00:29:19.345 "digest": "sha384", 00:29:19.345 "dhgroup": "ffdhe4096" 00:29:19.345 } 00:29:19.345 } 00:29:19.345 ]' 00:29:19.345 08:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:19.602 08:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:19.602 08:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:19.602 08:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:19.602 08:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:19.602 08:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:19.602 08:55:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:19.602 08:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:19.860 08:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZmYzODNkMzJlNmZmMzlkMDkzZDM2YzJiMTQxNTI0M2FjZTE3MDFiYThjZGUxMmFlN2MzNDQ3MmFkNTNhZTYyZgA1QD8=: 00:29:20.793 08:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:20.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:20.793 08:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:20.793 08:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.793 08:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:20.793 08:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.793 08:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:29:20.793 08:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:20.793 08:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:20.793 08:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:21.050 08:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:29:21.050 08:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:21.050 08:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:21.050 08:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:29:21.050 08:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:21.050 08:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:29:21.050 08:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.050 08:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:21.050 08:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.051 08:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:29:21.051 08:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:29:21.618 00:29:21.618 08:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:21.618 08:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:21.618 08:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:21.876 08:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.876 08:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:21.876 08:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.876 08:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:21.876 08:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.876 08:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:21.876 { 00:29:21.876 "cntlid": 81, 00:29:21.876 "qid": 0, 00:29:21.876 "state": "enabled", 00:29:21.876 "listen_address": { 00:29:21.876 "trtype": "TCP", 00:29:21.876 "adrfam": "IPv4", 00:29:21.876 "traddr": "10.0.0.2", 00:29:21.876 "trsvcid": "4420" 00:29:21.876 }, 00:29:21.876 "peer_address": { 00:29:21.876 "trtype": "TCP", 00:29:21.876 "adrfam": "IPv4", 00:29:21.876 "traddr": "10.0.0.1", 00:29:21.876 "trsvcid": "42278" 00:29:21.876 }, 00:29:21.876 "auth": { 00:29:21.876 "state": "completed", 00:29:21.876 "digest": "sha384", 00:29:21.876 "dhgroup": "ffdhe6144" 00:29:21.876 } 00:29:21.876 } 00:29:21.876 ]' 00:29:21.876 08:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:21.876 08:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:22.134 08:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:22.134 08:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:22.134 08:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:22.134 08:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:22.134 08:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:22.134 08:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:22.391 08:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:YWI3YjQ2MThmOGM4ZjY1MTg4ZGU0NTI4ODI0MGM0MzMyYWZlZGQyNDc5NmJlOGNivkRwUg==: 00:29:23.324 08:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:23.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:23.324 08:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:23.324 08:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.324 08:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:29:23.324 08:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.324 08:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:23.324 08:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:23.324 08:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:23.582 08:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:29:23.582 08:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:23.582 08:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:23.582 08:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:29:23.582 08:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:23.582 08:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:29:23.582 08:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.582 08:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:23.582 08:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.582 08:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:29:23.582 08:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:29:24.149 00:29:24.149 08:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:24.149 08:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:24.149 08:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:24.407 08:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.407 08:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:24.407 08:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.407 08:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:24.407 08:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.407 08:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:24.407 { 00:29:24.407 "cntlid": 83, 00:29:24.407 "qid": 0, 00:29:24.407 "state": "enabled", 00:29:24.407 "listen_address": { 00:29:24.407 "trtype": "TCP", 00:29:24.407 "adrfam": "IPv4", 00:29:24.407 "traddr": "10.0.0.2", 00:29:24.407 "trsvcid": "4420" 00:29:24.407 }, 00:29:24.407 "peer_address": { 00:29:24.407 
"trtype": "TCP", 00:29:24.407 "adrfam": "IPv4", 00:29:24.407 "traddr": "10.0.0.1", 00:29:24.407 "trsvcid": "42298" 00:29:24.407 }, 00:29:24.407 "auth": { 00:29:24.407 "state": "completed", 00:29:24.407 "digest": "sha384", 00:29:24.407 "dhgroup": "ffdhe6144" 00:29:24.407 } 00:29:24.407 } 00:29:24.407 ]' 00:29:24.407 08:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:24.407 08:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:24.407 08:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:24.407 08:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:24.407 08:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:24.407 08:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:24.407 08:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:24.407 08:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:24.665 08:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:N2EyM2IxNDc3MjM3ZjYyZTk5YjU0YjcxYWI1MWM4Mjl3KYtU: 00:29:25.599 08:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:25.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:25.599 08:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:25.599 08:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.599 08:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:25.599 08:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.599 08:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:25.599 08:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:25.599 08:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:25.857 08:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:29:25.857 08:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:25.857 08:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:25.857 08:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:29:25.857 08:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:29:25.857 08:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:29:25.857 08:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:29:25.857 08:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:25.857 08:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.857 08:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:29:25.857 08:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:29:26.423 00:29:26.423 08:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:26.423 08:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:26.423 08:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:26.681 08:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.681 08:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:26.681 08:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.681 08:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:26.681 08:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.681 08:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:26.681 { 00:29:26.681 "cntlid": 85, 00:29:26.681 "qid": 0, 00:29:26.681 "state": "enabled", 00:29:26.681 "listen_address": { 00:29:26.681 "trtype": "TCP", 00:29:26.681 "adrfam": "IPv4", 00:29:26.681 "traddr": "10.0.0.2", 00:29:26.681 "trsvcid": "4420" 00:29:26.681 }, 00:29:26.681 "peer_address": { 00:29:26.681 "trtype": "TCP", 00:29:26.681 "adrfam": "IPv4", 00:29:26.681 "traddr": "10.0.0.1", 00:29:26.681 "trsvcid": "42336" 00:29:26.681 }, 00:29:26.681 "auth": { 00:29:26.681 "state": "completed", 00:29:26.681 "digest": "sha384", 00:29:26.681 "dhgroup": "ffdhe6144" 00:29:26.681 } 00:29:26.681 } 00:29:26.681 ]' 00:29:26.681 08:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:26.681 08:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:26.681 08:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:26.681 08:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:26.681 08:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:26.939 08:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:26.939 08:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:26.939 08:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:27.197 08:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZTliOWIwMjlmNzE4OWMzMDVhOTQyYjA3NmE3NWI3Y2UyZmQyOTgxNGYzMTkxYjkzn7qbcw==: 00:29:28.130 08:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:28.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:28.130 08:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:28.130 08:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.130 08:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:28.130 08:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.130 08:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:28.130 08:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:28.130 08:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:28.388 08:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:29:28.388 08:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:28.388 08:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:28.388 08:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:29:28.388 08:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:28.388 08:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:29:28.388 08:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.388 08:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:28.388 08:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.388 08:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:28.388 08:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:28.952 00:29:28.952 08:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:28.952 08:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:28.952 08:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:29.211 08:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.211 08:55:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:29.211 08:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.211 08:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:29.211 08:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.211 08:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:29.211 { 00:29:29.211 "cntlid": 87, 00:29:29.211 "qid": 0, 00:29:29.211 "state": "enabled", 00:29:29.211 "listen_address": { 00:29:29.211 "trtype": "TCP", 00:29:29.211 "adrfam": "IPv4", 00:29:29.211 "traddr": "10.0.0.2", 00:29:29.211 "trsvcid": "4420" 00:29:29.211 }, 00:29:29.211 "peer_address": { 00:29:29.211 "trtype": "TCP", 00:29:29.211 "adrfam": "IPv4", 00:29:29.211 "traddr": "10.0.0.1", 00:29:29.211 "trsvcid": "57860" 00:29:29.211 }, 00:29:29.211 "auth": { 00:29:29.211 "state": "completed", 00:29:29.211 "digest": "sha384", 00:29:29.211 "dhgroup": "ffdhe6144" 00:29:29.211 } 00:29:29.211 } 00:29:29.211 ]' 00:29:29.211 08:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:29.211 08:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:29.211 08:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:29.211 08:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:29.211 08:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:29.211 08:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:29.211 08:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:29.211 08:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:29.469 08:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZmYzODNkMzJlNmZmMzlkMDkzZDM2YzJiMTQxNTI0M2FjZTE3MDFiYThjZGUxMmFlN2MzNDQ3MmFkNTNhZTYyZgA1QD8=: 00:29:30.403 08:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:30.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:30.403 08:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:30.403 08:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.403 08:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:30.403 08:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.403 08:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:29:30.403 08:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:30.403 08:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:30.403 08:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:30.661 08:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:29:30.661 08:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:30.661 08:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:30.661 08:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:29:30.661 08:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:30.661 08:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:29:30.661 08:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.661 08:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:30.661 08:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.661 08:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:29:30.661 08:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:29:31.624 00:29:31.624 08:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:31.624 08:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:31.624 08:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:31.882 08:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.882 08:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:31.882 08:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.882 08:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:31.882 08:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.882 08:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:31.882 { 00:29:31.882 "cntlid": 89, 00:29:31.882 "qid": 0, 00:29:31.882 "state": "enabled", 00:29:31.882 "listen_address": { 00:29:31.882 "trtype": "TCP", 00:29:31.882 "adrfam": "IPv4", 00:29:31.882 "traddr": "10.0.0.2", 00:29:31.882 "trsvcid": "4420" 00:29:31.882 }, 00:29:31.882 "peer_address": { 00:29:31.882 "trtype": "TCP", 00:29:31.882 "adrfam": "IPv4", 00:29:31.882 "traddr": "10.0.0.1", 00:29:31.882 "trsvcid": "57886" 00:29:31.882 }, 00:29:31.882 "auth": { 00:29:31.882 "state": "completed", 00:29:31.882 "digest": "sha384", 00:29:31.882 "dhgroup": "ffdhe8192" 00:29:31.882 } 00:29:31.882 } 00:29:31.882 ]' 00:29:31.882 08:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:31.882 08:55:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:31.882 08:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:31.882 08:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:31.882 08:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:31.882 08:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:31.882 08:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:31.882 08:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:32.448 08:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:YWI3YjQ2MThmOGM4ZjY1MTg4ZGU0NTI4ODI0MGM0MzMyYWZlZGQyNDc5NmJlOGNivkRwUg==: 00:29:33.380 08:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:33.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:33.380 08:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:33.380 08:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.380 08:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:33.380 08:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.380 08:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:33.380 08:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:33.380 08:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:33.380 08:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:29:33.380 08:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:33.380 08:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:33.380 08:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:29:33.380 08:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:33.380 08:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:29:33.380 08:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.380 08:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:33.638 08:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.638 08:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:29:33.638 08:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:29:34.571 00:29:34.571 08:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:34.571 08:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:34.571 08:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:34.571 08:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.571 08:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:34.571 08:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.571 08:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:34.571 08:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.571 08:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:34.571 { 00:29:34.571 "cntlid": 91, 00:29:34.571 "qid": 0, 00:29:34.571 "state": "enabled", 00:29:34.571 "listen_address": { 00:29:34.571 "trtype": "TCP", 00:29:34.571 "adrfam": "IPv4", 00:29:34.571 "traddr": "10.0.0.2", 00:29:34.571 "trsvcid": "4420" 00:29:34.571 }, 00:29:34.571 "peer_address": { 00:29:34.571 "trtype": "TCP", 00:29:34.571 "adrfam": "IPv4", 00:29:34.571 "traddr": "10.0.0.1", 00:29:34.571 "trsvcid": "57922" 00:29:34.571 }, 00:29:34.571 "auth": { 00:29:34.571 "state": "completed", 00:29:34.571 "digest": "sha384", 00:29:34.571 "dhgroup": "ffdhe8192" 00:29:34.571 } 00:29:34.571 } 00:29:34.571 ]' 00:29:34.571 08:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:34.571 08:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:34.571 08:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:34.571 08:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:34.828 08:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:34.828 08:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:34.828 08:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:34.828 08:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:35.086 08:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:N2EyM2IxNDc3MjM3ZjYyZTk5YjU0YjcxYWI1MWM4Mjl3KYtU: 00:29:36.019 08:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:36.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:29:36.019 08:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:36.019 08:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.019 08:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:36.019 08:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.019 08:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:36.019 08:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:36.019 08:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:36.277 08:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:29:36.277 08:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:36.277 08:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:36.277 08:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:29:36.277 08:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:29:36.277 08:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:29:36.277 08:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.277 08:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:36.277 08:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.277 08:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:29:36.277 08:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:29:37.210 00:29:37.210 08:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:37.210 08:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:37.210 08:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:37.210 08:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.210 08:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:37.210 08:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.210 08:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:37.210 08:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
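The qpairs JSON printed next is what each pass actually asserts on. A minimal standalone sketch of that verification, using the values negotiated in this sha384/ffdhe8192 pass (the jq filters and expected values are copied from the trace; only the standalone variable form is mine):

# Verify the target-side view of the authenticated qpair for this pass.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]  # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]  # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished

The listen_address/peer_address fields in the same JSON (10.0.0.2:4420 on the target, an ephemeral port on 10.0.0.1 for the initiator) identify which TCP connection the auth block describes; only the auth state is load-bearing for the test.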
00:29:37.210 08:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:37.210 { 00:29:37.210 "cntlid": 93, 00:29:37.210 "qid": 0, 00:29:37.211 "state": "enabled", 00:29:37.211 "listen_address": { 00:29:37.211 "trtype": "TCP", 00:29:37.211 "adrfam": "IPv4", 00:29:37.211 "traddr": "10.0.0.2", 00:29:37.211 "trsvcid": "4420" 00:29:37.211 }, 00:29:37.211 "peer_address": { 00:29:37.211 "trtype": "TCP", 00:29:37.211 "adrfam": "IPv4", 00:29:37.211 "traddr": "10.0.0.1", 00:29:37.211 "trsvcid": "57948" 00:29:37.211 }, 00:29:37.211 "auth": { 00:29:37.211 "state": "completed", 00:29:37.211 "digest": "sha384", 00:29:37.211 "dhgroup": "ffdhe8192" 00:29:37.211 } 00:29:37.211 } 00:29:37.211 ]' 00:29:37.211 08:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:37.468 08:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:37.468 08:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:37.469 08:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:37.469 08:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:37.469 08:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:37.469 08:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:37.469 08:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:37.726 08:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZTliOWIwMjlmNzE4OWMzMDVhOTQyYjA3NmE3NWI3Y2UyZmQyOTgxNGYzMTkxYjkzn7qbcw==: 00:29:38.658 08:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:38.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:38.658 08:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:38.658 08:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.658 08:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:38.658 08:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.658 08:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:38.658 08:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:38.658 08:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:38.916 08:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:29:38.916 08:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:38.916 08:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:38.916 08:55:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:29:38.916 08:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:38.916 08:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:29:38.916 08:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.916 08:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:38.916 08:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.916 08:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:38.916 08:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:39.851 00:29:39.851 08:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:39.851 08:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:39.851 08:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:40.109 08:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.109 08:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:40.109 08:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.109 08:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:40.109 08:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:40.109 08:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:40.109 { 00:29:40.109 "cntlid": 95, 00:29:40.109 "qid": 0, 00:29:40.109 "state": "enabled", 00:29:40.109 "listen_address": { 00:29:40.109 "trtype": "TCP", 00:29:40.109 "adrfam": "IPv4", 00:29:40.109 "traddr": "10.0.0.2", 00:29:40.109 "trsvcid": "4420" 00:29:40.109 }, 00:29:40.109 "peer_address": { 00:29:40.109 "trtype": "TCP", 00:29:40.109 "adrfam": "IPv4", 00:29:40.109 "traddr": "10.0.0.1", 00:29:40.109 "trsvcid": "37556" 00:29:40.109 }, 00:29:40.109 "auth": { 00:29:40.109 "state": "completed", 00:29:40.109 "digest": "sha384", 00:29:40.109 "dhgroup": "ffdhe8192" 00:29:40.109 } 00:29:40.109 } 00:29:40.109 ]' 00:29:40.109 08:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:40.109 08:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:40.109 08:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:40.109 08:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:40.109 08:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:40.109 08:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:40.109 08:55:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:40.109 08:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:40.675 08:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZmYzODNkMzJlNmZmMzlkMDkzZDM2YzJiMTQxNTI0M2FjZTE3MDFiYThjZGUxMmFlN2MzNDQ3MmFkNTNhZTYyZgA1QD8=: 00:29:41.608 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:41.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:41.608 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:41.608 08:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.608 08:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:41.608 08:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.608 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:29:41.608 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:29:41.608 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:41.608 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:41.608 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:41.866 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:29:41.866 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:41.866 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:41.866 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:29:41.866 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:41.866 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:29:41.866 08:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.866 08:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:41.866 08:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.866 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:29:41.866 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:29:42.125 00:29:42.125 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:42.125 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:42.125 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:42.383 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.383 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:42.383 08:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.383 08:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:42.383 08:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.383 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:42.383 { 00:29:42.383 "cntlid": 97, 00:29:42.383 "qid": 0, 00:29:42.383 "state": "enabled", 00:29:42.383 "listen_address": { 00:29:42.383 "trtype": "TCP", 00:29:42.383 "adrfam": "IPv4", 00:29:42.383 "traddr": "10.0.0.2", 00:29:42.383 "trsvcid": "4420" 00:29:42.383 }, 00:29:42.383 "peer_address": { 00:29:42.383 "trtype": "TCP", 00:29:42.383 "adrfam": "IPv4", 00:29:42.383 "traddr": "10.0.0.1", 00:29:42.383 "trsvcid": "37582" 00:29:42.383 }, 00:29:42.383 "auth": { 00:29:42.383 "state": "completed", 00:29:42.383 "digest": "sha512", 00:29:42.383 "dhgroup": "null" 00:29:42.383 } 00:29:42.383 } 00:29:42.383 ]' 00:29:42.383 08:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:42.383 08:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:42.383 08:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:42.383 08:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:29:42.383 08:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:42.383 08:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:42.383 08:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:42.383 08:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:42.641 08:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:YWI3YjQ2MThmOGM4ZjY1MTg4ZGU0NTI4ODI0MGM0MzMyYWZlZGQyNDc5NmJlOGNivkRwUg==: 00:29:43.575 08:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:43.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:43.575 08:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:43.575 08:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.575 08:55:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:43.575 08:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.575 08:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:43.575 08:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:43.575 08:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:43.833 08:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:29:43.833 08:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:43.833 08:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:43.833 08:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:29:43.833 08:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:43.833 08:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:29:43.833 08:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.833 08:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:43.833 08:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.833 08:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:29:43.833 08:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:29:44.090 00:29:44.090 08:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:44.090 08:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:44.090 08:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:44.348 08:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.348 08:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:44.348 08:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.348 08:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:44.348 08:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.348 08:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:44.348 { 00:29:44.348 "cntlid": 99, 00:29:44.348 "qid": 0, 00:29:44.348 "state": "enabled", 00:29:44.348 "listen_address": { 00:29:44.348 "trtype": "TCP", 00:29:44.348 "adrfam": "IPv4", 00:29:44.348 "traddr": "10.0.0.2", 00:29:44.348 "trsvcid": "4420" 00:29:44.348 }, 
00:29:44.348 "peer_address": { 00:29:44.348 "trtype": "TCP", 00:29:44.348 "adrfam": "IPv4", 00:29:44.348 "traddr": "10.0.0.1", 00:29:44.348 "trsvcid": "37604" 00:29:44.348 }, 00:29:44.348 "auth": { 00:29:44.348 "state": "completed", 00:29:44.348 "digest": "sha512", 00:29:44.348 "dhgroup": "null" 00:29:44.348 } 00:29:44.348 } 00:29:44.348 ]' 00:29:44.348 08:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:44.606 08:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:44.606 08:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:44.606 08:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:29:44.606 08:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:44.606 08:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:44.606 08:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:44.606 08:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:44.864 08:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:N2EyM2IxNDc3MjM3ZjYyZTk5YjU0YjcxYWI1MWM4Mjl3KYtU: 00:29:45.806 08:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:45.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:45.806 08:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:45.806 08:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.806 08:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:45.807 08:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.807 08:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:45.807 08:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:45.807 08:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:46.064 08:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:29:46.064 08:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:46.064 08:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:46.064 08:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:29:46.064 08:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:29:46.064 08:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:29:46.064 08:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:29:46.064 08:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:46.064 08:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.064 08:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:29:46.064 08:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:29:46.324 00:29:46.324 08:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:46.324 08:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:46.324 08:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:46.621 08:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.621 08:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:46.621 08:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.621 08:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:46.621 08:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.621 08:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:46.621 { 00:29:46.621 "cntlid": 101, 00:29:46.621 "qid": 0, 00:29:46.621 "state": "enabled", 00:29:46.621 "listen_address": { 00:29:46.621 "trtype": "TCP", 00:29:46.621 "adrfam": "IPv4", 00:29:46.621 "traddr": "10.0.0.2", 00:29:46.621 "trsvcid": "4420" 00:29:46.621 }, 00:29:46.621 "peer_address": { 00:29:46.621 "trtype": "TCP", 00:29:46.621 "adrfam": "IPv4", 00:29:46.621 "traddr": "10.0.0.1", 00:29:46.621 "trsvcid": "37634" 00:29:46.621 }, 00:29:46.621 "auth": { 00:29:46.621 "state": "completed", 00:29:46.621 "digest": "sha512", 00:29:46.621 "dhgroup": "null" 00:29:46.621 } 00:29:46.621 } 00:29:46.621 ]' 00:29:46.621 08:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:46.621 08:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:46.621 08:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:46.621 08:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:29:46.621 08:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:46.621 08:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:46.621 08:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:46.621 08:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:46.900 08:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZTliOWIwMjlmNzE4OWMzMDVhOTQyYjA3NmE3NWI3Y2UyZmQyOTgxNGYzMTkxYjkzn7qbcw==: 00:29:47.832 08:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:47.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:47.832 08:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:47.832 08:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:47.832 08:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:47.832 08:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:47.832 08:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:47.832 08:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:47.832 08:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:48.089 08:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:29:48.089 08:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:48.089 08:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:48.089 08:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:29:48.089 08:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:48.089 08:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:29:48.089 08:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:48.089 08:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:48.089 08:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:48.089 08:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:48.089 08:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:48.347 00:29:48.347 08:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:48.347 08:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:48.347 08:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:48.604 08:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.604 08:55:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:48.604 08:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:48.604 08:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:48.604 08:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:48.604 08:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:48.604 { 00:29:48.604 "cntlid": 103, 00:29:48.604 "qid": 0, 00:29:48.604 "state": "enabled", 00:29:48.604 "listen_address": { 00:29:48.604 "trtype": "TCP", 00:29:48.604 "adrfam": "IPv4", 00:29:48.604 "traddr": "10.0.0.2", 00:29:48.604 "trsvcid": "4420" 00:29:48.604 }, 00:29:48.604 "peer_address": { 00:29:48.604 "trtype": "TCP", 00:29:48.604 "adrfam": "IPv4", 00:29:48.604 "traddr": "10.0.0.1", 00:29:48.604 "trsvcid": "48424" 00:29:48.604 }, 00:29:48.604 "auth": { 00:29:48.604 "state": "completed", 00:29:48.604 "digest": "sha512", 00:29:48.604 "dhgroup": "null" 00:29:48.604 } 00:29:48.604 } 00:29:48.604 ]' 00:29:48.604 08:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:48.604 08:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:48.604 08:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:48.861 08:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:29:48.861 08:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:48.861 08:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:48.861 08:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:48.861 08:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:49.118 08:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZmYzODNkMzJlNmZmMzlkMDkzZDM2YzJiMTQxNTI0M2FjZTE3MDFiYThjZGUxMmFlN2MzNDQ3MmFkNTNhZTYyZgA1QD8=: 00:29:50.050 08:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:50.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:50.050 08:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:50.050 08:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:50.050 08:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:50.050 08:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:50.050 08:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:29:50.050 08:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:50.050 08:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:50.050 08:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
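
The kernel-initiator re-check uses nvme-cli with a pre-shared secret in the NVMe in-band-authentication text representation. The shape of the command as it recurs through this section (the base64 payload is a placeholder here; the real secrets appear verbatim in the trace):

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
    --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
    --dhchap-secret 'DHHC-1:03:<base64 key material>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The two-digit field after DHHC-1 encodes the key transform (00 = untransformed; 01/02/03 correspond to SHA-256/384/512-sized keys in the spec's secret representation), which is consistent with key0 through key3 carrying :00: through :03: prefixes in this log.
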
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:50.307 08:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:29:50.307 08:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:50.307 08:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:50.307 08:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:29:50.307 08:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:50.307 08:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:29:50.307 08:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:50.307 08:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:50.307 08:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:50.307 08:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:29:50.307 08:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:29:50.565 00:29:50.565 08:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:50.565 08:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:50.565 08:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:50.822 08:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:50.822 08:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:50.822 08:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:50.822 08:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:50.822 08:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:50.822 08:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:50.822 { 00:29:50.822 "cntlid": 105, 00:29:50.822 "qid": 0, 00:29:50.822 "state": "enabled", 00:29:50.822 "listen_address": { 00:29:50.822 "trtype": "TCP", 00:29:50.822 "adrfam": "IPv4", 00:29:50.822 "traddr": "10.0.0.2", 00:29:50.822 "trsvcid": "4420" 00:29:50.822 }, 00:29:50.822 "peer_address": { 00:29:50.822 "trtype": "TCP", 00:29:50.822 "adrfam": "IPv4", 00:29:50.822 "traddr": "10.0.0.1", 00:29:50.822 "trsvcid": "48454" 00:29:50.822 }, 00:29:50.822 "auth": { 00:29:50.822 "state": "completed", 00:29:50.822 "digest": "sha512", 00:29:50.822 "dhgroup": "ffdhe2048" 00:29:50.822 } 00:29:50.822 } 00:29:50.822 ]' 00:29:50.822 08:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:50.822 08:55:45 
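
The @43-@47 steps above are the assertion half of connect_authenticate: pull the qpair list from the target and confirm that the negotiated digest, DH group, and final auth state match what was configured. Equivalent standalone checks for one of the ffdhe2048 passes would be:

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]    # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]] # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]] # handshake finished
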
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:50.822 08:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:50.822 08:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:50.822 08:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:50.822 08:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:50.822 08:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:50.823 08:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:51.081 08:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:YWI3YjQ2MThmOGM4ZjY1MTg4ZGU0NTI4ODI0MGM0MzMyYWZlZGQyNDc5NmJlOGNivkRwUg==: 00:29:52.015 08:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:52.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:52.015 08:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:52.015 08:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:52.015 08:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:52.015 08:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:52.015 08:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:52.015 08:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:52.015 08:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:52.581 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:29:52.581 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:52.581 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:52.581 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:29:52.581 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:52.581 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:29:52.581 08:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:52.581 08:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:52.581 08:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:52.581 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:29:52.581 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:29:52.840 00:29:52.840 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:52.840 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:52.840 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:53.098 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.098 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:53.098 08:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:53.098 08:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:53.098 08:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:53.098 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:53.098 { 00:29:53.098 "cntlid": 107, 00:29:53.098 "qid": 0, 00:29:53.098 "state": "enabled", 00:29:53.098 "listen_address": { 00:29:53.098 "trtype": "TCP", 00:29:53.098 "adrfam": "IPv4", 00:29:53.098 "traddr": "10.0.0.2", 00:29:53.098 "trsvcid": "4420" 00:29:53.098 }, 00:29:53.098 "peer_address": { 00:29:53.098 "trtype": "TCP", 00:29:53.098 "adrfam": "IPv4", 00:29:53.098 "traddr": "10.0.0.1", 00:29:53.098 "trsvcid": "48496" 00:29:53.098 }, 00:29:53.098 "auth": { 00:29:53.098 "state": "completed", 00:29:53.098 "digest": "sha512", 00:29:53.098 "dhgroup": "ffdhe2048" 00:29:53.098 } 00:29:53.098 } 00:29:53.098 ]' 00:29:53.098 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:53.098 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:53.098 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:53.098 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:53.098 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:53.098 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:53.098 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:53.098 08:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:53.356 08:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:N2EyM2IxNDc3MjM3ZjYyZTk5YjU0YjcxYWI1MWM4Mjl3KYtU: 00:29:54.292 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:54.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:29:54.292 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:54.292 08:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:54.292 08:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:54.292 08:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:54.292 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:54.292 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:54.292 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:54.550 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:29:54.550 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:54.550 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:54.550 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:29:54.550 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:29:54.550 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:29:54.550 08:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:54.550 08:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:54.550 08:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:54.550 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:29:54.550 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:29:55.116 00:29:55.116 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:55.116 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:55.116 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:55.116 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.116 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:55.116 08:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:55.116 08:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:55.116 08:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:29:55.116 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:55.116 { 00:29:55.116 "cntlid": 109, 00:29:55.116 "qid": 0, 00:29:55.116 "state": "enabled", 00:29:55.116 "listen_address": { 00:29:55.116 "trtype": "TCP", 00:29:55.116 "adrfam": "IPv4", 00:29:55.116 "traddr": "10.0.0.2", 00:29:55.116 "trsvcid": "4420" 00:29:55.116 }, 00:29:55.116 "peer_address": { 00:29:55.116 "trtype": "TCP", 00:29:55.116 "adrfam": "IPv4", 00:29:55.116 "traddr": "10.0.0.1", 00:29:55.116 "trsvcid": "48522" 00:29:55.116 }, 00:29:55.116 "auth": { 00:29:55.116 "state": "completed", 00:29:55.116 "digest": "sha512", 00:29:55.116 "dhgroup": "ffdhe2048" 00:29:55.116 } 00:29:55.116 } 00:29:55.116 ]' 00:29:55.116 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:55.374 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:55.374 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:55.374 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:55.374 08:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:55.374 08:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:55.374 08:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:55.374 08:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:55.632 08:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZTliOWIwMjlmNzE4OWMzMDVhOTQyYjA3NmE3NWI3Y2UyZmQyOTgxNGYzMTkxYjkzn7qbcw==: 00:29:56.566 08:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:56.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:56.566 08:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:56.566 08:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:56.566 08:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:56.566 08:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:56.566 08:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:56.566 08:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:56.566 08:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:56.823 08:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:29:56.823 08:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:56.823 08:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:56.823 08:55:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe2048 00:29:56.823 08:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:56.823 08:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:29:56.823 08:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:56.823 08:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:56.823 08:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:56.823 08:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:56.823 08:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:57.080 00:29:57.080 08:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:57.080 08:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:57.080 08:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:57.338 08:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.338 08:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:57.338 08:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:57.338 08:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:57.338 08:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:57.338 08:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:57.338 { 00:29:57.338 "cntlid": 111, 00:29:57.338 "qid": 0, 00:29:57.338 "state": "enabled", 00:29:57.338 "listen_address": { 00:29:57.338 "trtype": "TCP", 00:29:57.338 "adrfam": "IPv4", 00:29:57.338 "traddr": "10.0.0.2", 00:29:57.338 "trsvcid": "4420" 00:29:57.338 }, 00:29:57.338 "peer_address": { 00:29:57.338 "trtype": "TCP", 00:29:57.338 "adrfam": "IPv4", 00:29:57.338 "traddr": "10.0.0.1", 00:29:57.338 "trsvcid": "50970" 00:29:57.338 }, 00:29:57.338 "auth": { 00:29:57.338 "state": "completed", 00:29:57.338 "digest": "sha512", 00:29:57.338 "dhgroup": "ffdhe2048" 00:29:57.338 } 00:29:57.338 } 00:29:57.338 ]' 00:29:57.338 08:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:57.338 08:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:57.338 08:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:57.596 08:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:57.596 08:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:57.596 08:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:57.596 08:55:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:57.596 08:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:57.854 08:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZmYzODNkMzJlNmZmMzlkMDkzZDM2YzJiMTQxNTI0M2FjZTE3MDFiYThjZGUxMmFlN2MzNDQ3MmFkNTNhZTYyZgA1QD8=: 00:29:58.787 08:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:58.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:58.787 08:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:58.787 08:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:58.787 08:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:58.787 08:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:58.787 08:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:29:58.787 08:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:29:58.787 08:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:58.787 08:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:59.044 08:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:29:59.044 08:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:29:59.044 08:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:59.044 08:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:29:59.044 08:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:59.044 08:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:29:59.044 08:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:59.044 08:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:59.044 08:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:59.044 08:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:29:59.044 08:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:29:59.302 00:29:59.302 08:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:29:59.302 08:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:59.302 08:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:29:59.567 08:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:59.567 08:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:59.567 08:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:59.567 08:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:59.567 08:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:59.567 08:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:29:59.567 { 00:29:59.567 "cntlid": 113, 00:29:59.567 "qid": 0, 00:29:59.567 "state": "enabled", 00:29:59.567 "listen_address": { 00:29:59.567 "trtype": "TCP", 00:29:59.567 "adrfam": "IPv4", 00:29:59.567 "traddr": "10.0.0.2", 00:29:59.567 "trsvcid": "4420" 00:29:59.567 }, 00:29:59.567 "peer_address": { 00:29:59.567 "trtype": "TCP", 00:29:59.567 "adrfam": "IPv4", 00:29:59.567 "traddr": "10.0.0.1", 00:29:59.567 "trsvcid": "50998" 00:29:59.567 }, 00:29:59.567 "auth": { 00:29:59.567 "state": "completed", 00:29:59.567 "digest": "sha512", 00:29:59.567 "dhgroup": "ffdhe3072" 00:29:59.567 } 00:29:59.567 } 00:29:59.567 ]' 00:29:59.567 08:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:29:59.567 08:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:59.567 08:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:29:59.567 08:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:59.567 08:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:29:59.567 08:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:59.567 08:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:59.567 08:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:59.826 08:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:YWI3YjQ2MThmOGM4ZjY1MTg4ZGU0NTI4ODI0MGM0MzMyYWZlZGQyNDc5NmJlOGNivkRwUg==: 00:30:00.757 08:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:00.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:00.757 08:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:00.757 08:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:00.757 08:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
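
The sweep visible here is driven by the two loops that keep reappearing in the trace (target/auth.sh@85 over DH groups, @86 over key indices). Reconstructed control flow, assuming an outer digest loop that has reached sha512 (only sha512 appears in this excerpt, with dhgroups null through ffdhe4096):

for dhgroup in "${dhgroups[@]}"; do     # null, ffdhe2048, ffdhe3072, ffdhe4096, ...
    for keyid in "${!keys[@]}"; do      # key0 .. key3
        hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha512 "$dhgroup" "$keyid"
    done
done
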
# set +x 00:30:00.757 08:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:00.757 08:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:30:00.757 08:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:00.757 08:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:01.323 08:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:30:01.323 08:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:30:01.323 08:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:01.323 08:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:30:01.323 08:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:30:01.323 08:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:30:01.323 08:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:01.323 08:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:01.323 08:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:01.323 08:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:30:01.323 08:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:30:01.613 00:30:01.613 08:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:30:01.613 08:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:30:01.613 08:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:01.871 08:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.871 08:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:01.871 08:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:01.871 08:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:01.871 08:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:01.871 08:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:30:01.871 { 00:30:01.871 "cntlid": 115, 00:30:01.871 "qid": 0, 00:30:01.871 "state": "enabled", 00:30:01.871 "listen_address": { 00:30:01.871 "trtype": "TCP", 00:30:01.871 "adrfam": "IPv4", 00:30:01.871 "traddr": "10.0.0.2", 00:30:01.871 "trsvcid": "4420" 00:30:01.871 }, 00:30:01.871 "peer_address": { 00:30:01.871 
"trtype": "TCP", 00:30:01.871 "adrfam": "IPv4", 00:30:01.871 "traddr": "10.0.0.1", 00:30:01.871 "trsvcid": "51016" 00:30:01.871 }, 00:30:01.871 "auth": { 00:30:01.871 "state": "completed", 00:30:01.871 "digest": "sha512", 00:30:01.871 "dhgroup": "ffdhe3072" 00:30:01.871 } 00:30:01.871 } 00:30:01.871 ]' 00:30:01.871 08:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:30:01.871 08:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:01.872 08:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:30:01.872 08:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:30:01.872 08:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:30:01.872 08:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:01.872 08:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:01.872 08:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:02.130 08:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:N2EyM2IxNDc3MjM3ZjYyZTk5YjU0YjcxYWI1MWM4Mjl3KYtU: 00:30:03.064 08:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:03.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:03.064 08:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:03.064 08:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:03.064 08:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:03.064 08:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:03.064 08:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:30:03.064 08:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:03.064 08:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:03.322 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:30:03.322 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:30:03.322 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:03.322 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:30:03.322 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:30:03.322 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:30:03.322 08:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:30:03.322 08:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:03.322 08:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:03.322 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:30:03.322 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:30:03.888 00:30:03.888 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:30:03.888 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:30:03.888 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:04.146 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.146 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:04.146 08:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:04.146 08:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:04.146 08:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:04.146 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:30:04.146 { 00:30:04.146 "cntlid": 117, 00:30:04.146 "qid": 0, 00:30:04.146 "state": "enabled", 00:30:04.146 "listen_address": { 00:30:04.146 "trtype": "TCP", 00:30:04.146 "adrfam": "IPv4", 00:30:04.146 "traddr": "10.0.0.2", 00:30:04.146 "trsvcid": "4420" 00:30:04.146 }, 00:30:04.146 "peer_address": { 00:30:04.146 "trtype": "TCP", 00:30:04.146 "adrfam": "IPv4", 00:30:04.146 "traddr": "10.0.0.1", 00:30:04.146 "trsvcid": "51058" 00:30:04.146 }, 00:30:04.146 "auth": { 00:30:04.146 "state": "completed", 00:30:04.146 "digest": "sha512", 00:30:04.146 "dhgroup": "ffdhe3072" 00:30:04.146 } 00:30:04.146 } 00:30:04.146 ]' 00:30:04.146 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:30:04.146 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:04.146 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:30:04.146 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:30:04.146 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:30:04.146 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:04.146 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:04.146 08:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:04.403 08:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZTliOWIwMjlmNzE4OWMzMDVhOTQyYjA3NmE3NWI3Y2UyZmQyOTgxNGYzMTkxYjkzn7qbcw==: 00:30:05.337 08:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:05.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:05.337 08:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:05.337 08:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:05.337 08:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:05.337 08:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:05.337 08:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:30:05.337 08:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:05.338 08:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:05.596 08:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:30:05.596 08:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:30:05.596 08:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:05.596 08:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:30:05.596 08:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:30:05.596 08:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:30:05.596 08:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:05.596 08:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:05.596 08:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:05.596 08:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:05.596 08:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:05.853 00:30:05.853 08:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:30:05.854 08:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:05.854 08:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:30:06.111 08:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.111 08:56:00 
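
Two SPDK instances are in play throughout: rpc_cmd addresses the target over its default RPC socket, while every target/auth.sh@31 line shows the hostrpc wrapper pinning the initiator-side instance to /var/tmp/host.sock. In effect:

hostrpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
}

That split is what lets a single script exercise both ends of the DH-HMAC-CHAP handshake on one machine, matching the 10.0.0.1 -> 10.0.0.2 peer/listen address pairs in the qpair dumps.
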
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:06.111 08:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:06.111 08:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:06.369 08:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:06.369 08:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:30:06.369 { 00:30:06.369 "cntlid": 119, 00:30:06.369 "qid": 0, 00:30:06.369 "state": "enabled", 00:30:06.369 "listen_address": { 00:30:06.369 "trtype": "TCP", 00:30:06.369 "adrfam": "IPv4", 00:30:06.369 "traddr": "10.0.0.2", 00:30:06.369 "trsvcid": "4420" 00:30:06.369 }, 00:30:06.369 "peer_address": { 00:30:06.369 "trtype": "TCP", 00:30:06.369 "adrfam": "IPv4", 00:30:06.369 "traddr": "10.0.0.1", 00:30:06.369 "trsvcid": "51078" 00:30:06.369 }, 00:30:06.369 "auth": { 00:30:06.369 "state": "completed", 00:30:06.369 "digest": "sha512", 00:30:06.369 "dhgroup": "ffdhe3072" 00:30:06.369 } 00:30:06.369 } 00:30:06.369 ]' 00:30:06.369 08:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:30:06.369 08:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:06.369 08:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:30:06.369 08:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:30:06.369 08:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:30:06.369 08:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:06.369 08:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:06.369 08:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:06.627 08:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZmYzODNkMzJlNmZmMzlkMDkzZDM2YzJiMTQxNTI0M2FjZTE3MDFiYThjZGUxMmFlN2MzNDQ3MmFkNTNhZTYyZgA1QD8=: 00:30:07.559 08:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:07.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:07.559 08:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:07.559 08:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:07.559 08:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:07.559 08:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:07.559 08:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:30:07.559 08:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:30:07.559 08:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:07.559 08:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:07.816 08:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:30:07.816 08:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:30:07.816 08:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:07.816 08:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:30:07.816 08:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:30:07.816 08:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:30:07.816 08:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:07.816 08:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:07.816 08:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:07.816 08:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:30:07.816 08:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:30:08.075 00:30:08.075 08:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:30:08.075 08:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:30:08.075 08:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:08.332 08:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.332 08:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:08.332 08:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:08.332 08:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:08.332 08:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.332 08:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:30:08.332 { 00:30:08.332 "cntlid": 121, 00:30:08.332 "qid": 0, 00:30:08.332 "state": "enabled", 00:30:08.332 "listen_address": { 00:30:08.332 "trtype": "TCP", 00:30:08.332 "adrfam": "IPv4", 00:30:08.332 "traddr": "10.0.0.2", 00:30:08.332 "trsvcid": "4420" 00:30:08.332 }, 00:30:08.332 "peer_address": { 00:30:08.332 "trtype": "TCP", 00:30:08.332 "adrfam": "IPv4", 00:30:08.332 "traddr": "10.0.0.1", 00:30:08.332 "trsvcid": "36222" 00:30:08.332 }, 00:30:08.332 "auth": { 00:30:08.332 "state": "completed", 00:30:08.332 "digest": "sha512", 00:30:08.332 "dhgroup": "ffdhe4096" 00:30:08.332 } 00:30:08.332 } 00:30:08.332 ]' 00:30:08.332 08:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:30:08.332 08:56:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:08.332 08:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:30:08.589 08:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:30:08.589 08:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:30:08.589 08:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:08.589 08:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:08.589 08:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:08.846 08:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:YWI3YjQ2MThmOGM4ZjY1MTg4ZGU0NTI4ODI0MGM0MzMyYWZlZGQyNDc5NmJlOGNivkRwUg==: 00:30:09.778 08:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:09.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:09.778 08:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:09.778 08:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:09.778 08:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:09.778 08:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:09.778 08:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:30:09.778 08:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:09.778 08:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:10.036 08:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:30:10.036 08:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:30:10.036 08:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:10.036 08:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:30:10.036 08:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:30:10.036 08:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:30:10.036 08:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:10.036 08:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:10.036 08:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:10.036 08:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:30:10.036 08:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:30:10.293 00:30:10.293 08:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:30:10.293 08:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:10.293 08:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:30:10.552 08:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.552 08:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:10.552 08:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:10.552 08:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:10.552 08:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:10.552 08:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:30:10.552 { 00:30:10.552 "cntlid": 123, 00:30:10.552 "qid": 0, 00:30:10.552 "state": "enabled", 00:30:10.552 "listen_address": { 00:30:10.552 "trtype": "TCP", 00:30:10.552 "adrfam": "IPv4", 00:30:10.552 "traddr": "10.0.0.2", 00:30:10.552 "trsvcid": "4420" 00:30:10.552 }, 00:30:10.552 "peer_address": { 00:30:10.552 "trtype": "TCP", 00:30:10.552 "adrfam": "IPv4", 00:30:10.552 "traddr": "10.0.0.1", 00:30:10.552 "trsvcid": "36252" 00:30:10.552 }, 00:30:10.552 "auth": { 00:30:10.552 "state": "completed", 00:30:10.552 "digest": "sha512", 00:30:10.552 "dhgroup": "ffdhe4096" 00:30:10.552 } 00:30:10.552 } 00:30:10.552 ]' 00:30:10.552 08:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:30:10.809 08:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:10.809 08:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:30:10.810 08:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:30:10.810 08:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:30:10.810 08:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:10.810 08:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:10.810 08:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:11.067 08:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:N2EyM2IxNDc3MjM3ZjYyZTk5YjU0YjcxYWI1MWM4Mjl3KYtU: 00:30:12.001 08:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:12.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:30:12.001 08:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:12.001 08:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:12.001 08:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:12.001 08:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:12.001 08:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:30:12.001 08:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:12.001 08:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:12.258 08:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:30:12.258 08:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:30:12.258 08:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:12.258 08:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:30:12.258 08:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:30:12.258 08:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:30:12.258 08:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:12.258 08:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:12.258 08:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:12.258 08:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:30:12.258 08:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:30:12.823 00:30:12.823 08:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:30:12.823 08:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:30:12.823 08:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:12.823 08:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.823 08:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:12.823 08:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:12.823 08:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:12.823 08:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
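The blocks above and below are iterations of connect_authenticate() from target/auth.sh: for each DH group in the ffdhe3072..ffdhe8192 series and each of keys key0..key3, the host stack is reconfigured for DH-HMAC-CHAP over sha512, the target re-adds the host NQN with the key under test, a controller is attached, and the auth parameters negotiated on the resulting qpair are asserted with jq before everything is torn down again. A condensed sketch of one round follows; it assumes SPDK's scripts/rpc.py with the target on its default socket and the host instance on /var/tmp/host.sock, and HOSTNQN abbreviates the nqn.2014-08.org.nvmexpress:uuid:... host NQN used throughout this run.

    # One connect_authenticate round, condensed (a sketch, not a verbatim
    # excerpt of target/auth.sh; HOSTNQN and the default target-side rpc.py
    # socket are assumptions).
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "$HOSTNQN" --dhchap-key key2
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
    # Inspect what was negotiated on the new qpair:
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 |
        jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
    # expected output: sha512 / ffdhe4096 / completed
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0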
00:30:12.823 08:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:30:12.823 { 00:30:12.823 "cntlid": 125, 00:30:12.823 "qid": 0, 00:30:12.823 "state": "enabled", 00:30:12.823 "listen_address": { 00:30:12.823 "trtype": "TCP", 00:30:12.823 "adrfam": "IPv4", 00:30:12.823 "traddr": "10.0.0.2", 00:30:12.823 "trsvcid": "4420" 00:30:12.823 }, 00:30:12.823 "peer_address": { 00:30:12.823 "trtype": "TCP", 00:30:12.823 "adrfam": "IPv4", 00:30:12.823 "traddr": "10.0.0.1", 00:30:12.823 "trsvcid": "36280" 00:30:12.823 }, 00:30:12.823 "auth": { 00:30:12.823 "state": "completed", 00:30:12.823 "digest": "sha512", 00:30:12.823 "dhgroup": "ffdhe4096" 00:30:12.823 } 00:30:12.823 } 00:30:12.823 ]' 00:30:12.823 08:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:30:13.080 08:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:13.080 08:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:30:13.080 08:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:30:13.080 08:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:30:13.080 08:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:13.080 08:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:13.080 08:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:13.338 08:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZTliOWIwMjlmNzE4OWMzMDVhOTQyYjA3NmE3NWI3Y2UyZmQyOTgxNGYzMTkxYjkzn7qbcw==: 00:30:14.271 08:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:14.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:14.271 08:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:14.271 08:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:14.271 08:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:14.271 08:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:14.271 08:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:30:14.271 08:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:14.271 08:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:14.530 08:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:30:14.530 08:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:30:14.530 08:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:14.530 08:56:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:30:14.530 08:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:30:14.530 08:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:30:14.530 08:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:14.530 08:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:14.530 08:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:14.530 08:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:14.530 08:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:15.097 00:30:15.097 08:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:30:15.097 08:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:30:15.097 08:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:15.356 08:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:15.356 08:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:15.356 08:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:15.356 08:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:15.356 08:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:15.356 08:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:30:15.356 { 00:30:15.356 "cntlid": 127, 00:30:15.356 "qid": 0, 00:30:15.356 "state": "enabled", 00:30:15.356 "listen_address": { 00:30:15.356 "trtype": "TCP", 00:30:15.356 "adrfam": "IPv4", 00:30:15.356 "traddr": "10.0.0.2", 00:30:15.356 "trsvcid": "4420" 00:30:15.356 }, 00:30:15.356 "peer_address": { 00:30:15.356 "trtype": "TCP", 00:30:15.356 "adrfam": "IPv4", 00:30:15.356 "traddr": "10.0.0.1", 00:30:15.356 "trsvcid": "36290" 00:30:15.356 }, 00:30:15.356 "auth": { 00:30:15.356 "state": "completed", 00:30:15.356 "digest": "sha512", 00:30:15.356 "dhgroup": "ffdhe4096" 00:30:15.356 } 00:30:15.356 } 00:30:15.356 ]' 00:30:15.356 08:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:30:15.356 08:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:15.356 08:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:30:15.356 08:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:30:15.356 08:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:30:15.356 08:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:15.356 08:56:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:15.356 08:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:15.614 08:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZmYzODNkMzJlNmZmMzlkMDkzZDM2YzJiMTQxNTI0M2FjZTE3MDFiYThjZGUxMmFlN2MzNDQ3MmFkNTNhZTYyZgA1QD8=: 00:30:16.583 08:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:16.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:16.583 08:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:16.583 08:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:16.583 08:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:16.583 08:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:16.583 08:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:30:16.583 08:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:30:16.583 08:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:16.583 08:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:16.841 08:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:30:16.841 08:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:30:16.841 08:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:16.841 08:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:30:16.841 08:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:30:16.841 08:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:30:16.841 08:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:16.841 08:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:16.841 08:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:16.841 08:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:30:16.841 08:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:30:17.406 00:30:17.406 08:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:30:17.406 08:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:30:17.406 08:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:17.663 08:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.663 08:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:17.663 08:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:17.663 08:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:17.663 08:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:17.663 08:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:30:17.663 { 00:30:17.663 "cntlid": 129, 00:30:17.663 "qid": 0, 00:30:17.663 "state": "enabled", 00:30:17.663 "listen_address": { 00:30:17.663 "trtype": "TCP", 00:30:17.663 "adrfam": "IPv4", 00:30:17.663 "traddr": "10.0.0.2", 00:30:17.663 "trsvcid": "4420" 00:30:17.663 }, 00:30:17.663 "peer_address": { 00:30:17.663 "trtype": "TCP", 00:30:17.663 "adrfam": "IPv4", 00:30:17.663 "traddr": "10.0.0.1", 00:30:17.663 "trsvcid": "39298" 00:30:17.663 }, 00:30:17.663 "auth": { 00:30:17.663 "state": "completed", 00:30:17.663 "digest": "sha512", 00:30:17.663 "dhgroup": "ffdhe6144" 00:30:17.663 } 00:30:17.663 } 00:30:17.663 ]' 00:30:17.663 08:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:30:17.663 08:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:17.663 08:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:30:17.663 08:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:17.663 08:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:30:17.920 08:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:17.920 08:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:17.920 08:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:18.179 08:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:YWI3YjQ2MThmOGM4ZjY1MTg4ZGU0NTI4ODI0MGM0MzMyYWZlZGQyNDc5NmJlOGNivkRwUg==: 00:30:19.111 08:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:19.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:19.111 08:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:19.111 08:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:19.111 08:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:30:19.111 08:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:19.111 08:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:30:19.111 08:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:19.111 08:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:19.368 08:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:30:19.368 08:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:30:19.368 08:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:19.368 08:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:30:19.368 08:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:30:19.368 08:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:30:19.368 08:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:19.368 08:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:19.368 08:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:19.368 08:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:30:19.368 08:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:30:19.932 00:30:19.932 08:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:30:19.932 08:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:30:19.932 08:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:20.189 08:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.189 08:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:20.189 08:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:20.189 08:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:20.189 08:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:20.189 08:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:30:20.189 { 00:30:20.189 "cntlid": 131, 00:30:20.189 "qid": 0, 00:30:20.189 "state": "enabled", 00:30:20.189 "listen_address": { 00:30:20.189 "trtype": "TCP", 00:30:20.189 "adrfam": "IPv4", 00:30:20.189 "traddr": "10.0.0.2", 00:30:20.189 "trsvcid": "4420" 00:30:20.189 }, 00:30:20.189 "peer_address": { 00:30:20.189 
"trtype": "TCP", 00:30:20.189 "adrfam": "IPv4", 00:30:20.189 "traddr": "10.0.0.1", 00:30:20.189 "trsvcid": "39330" 00:30:20.189 }, 00:30:20.189 "auth": { 00:30:20.189 "state": "completed", 00:30:20.189 "digest": "sha512", 00:30:20.189 "dhgroup": "ffdhe6144" 00:30:20.189 } 00:30:20.189 } 00:30:20.189 ]' 00:30:20.189 08:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:30:20.189 08:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:20.189 08:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:30:20.189 08:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:20.189 08:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:30:20.189 08:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:20.189 08:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:20.189 08:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:20.446 08:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:N2EyM2IxNDc3MjM3ZjYyZTk5YjU0YjcxYWI1MWM4Mjl3KYtU: 00:30:21.379 08:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:21.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:21.379 08:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:21.379 08:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:21.379 08:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:21.379 08:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.379 08:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:30:21.379 08:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:21.379 08:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:21.637 08:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:30:21.637 08:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:30:21.637 08:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:21.637 08:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:30:21.637 08:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:30:21.637 08:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:30:21.637 08:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:30:21.637 08:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:21.637 08:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.637 08:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:30:21.637 08:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:30:22.201 00:30:22.201 08:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:30:22.201 08:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:30:22.201 08:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:22.459 08:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:22.459 08:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:22.459 08:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:22.459 08:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:22.459 08:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:22.459 08:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:30:22.459 { 00:30:22.459 "cntlid": 133, 00:30:22.459 "qid": 0, 00:30:22.459 "state": "enabled", 00:30:22.459 "listen_address": { 00:30:22.459 "trtype": "TCP", 00:30:22.459 "adrfam": "IPv4", 00:30:22.459 "traddr": "10.0.0.2", 00:30:22.459 "trsvcid": "4420" 00:30:22.459 }, 00:30:22.459 "peer_address": { 00:30:22.459 "trtype": "TCP", 00:30:22.459 "adrfam": "IPv4", 00:30:22.459 "traddr": "10.0.0.1", 00:30:22.459 "trsvcid": "39348" 00:30:22.459 }, 00:30:22.459 "auth": { 00:30:22.459 "state": "completed", 00:30:22.459 "digest": "sha512", 00:30:22.459 "dhgroup": "ffdhe6144" 00:30:22.459 } 00:30:22.459 } 00:30:22.459 ]' 00:30:22.459 08:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:30:22.459 08:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:22.459 08:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:30:22.459 08:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:22.459 08:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:30:22.459 08:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:22.459 08:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:22.459 08:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:22.717 08:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZTliOWIwMjlmNzE4OWMzMDVhOTQyYjA3NmE3NWI3Y2UyZmQyOTgxNGYzMTkxYjkzn7qbcw==: 00:30:23.649 08:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:23.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:23.649 08:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:23.649 08:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:23.649 08:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:23.649 08:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:23.649 08:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:30:23.649 08:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:23.649 08:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:23.907 08:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:30:23.907 08:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:30:23.907 08:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:23.907 08:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:30:23.907 08:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:30:23.907 08:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:30:23.907 08:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:23.907 08:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:23.907 08:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:23.907 08:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:23.907 08:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:24.840 00:30:24.840 08:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:30:24.840 08:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:24.840 08:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:30:24.840 08:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:24.840 08:56:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:24.840 08:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:24.840 08:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:24.840 08:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:24.840 08:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:30:24.840 { 00:30:24.840 "cntlid": 135, 00:30:24.840 "qid": 0, 00:30:24.840 "state": "enabled", 00:30:24.840 "listen_address": { 00:30:24.840 "trtype": "TCP", 00:30:24.840 "adrfam": "IPv4", 00:30:24.840 "traddr": "10.0.0.2", 00:30:24.840 "trsvcid": "4420" 00:30:24.840 }, 00:30:24.840 "peer_address": { 00:30:24.840 "trtype": "TCP", 00:30:24.840 "adrfam": "IPv4", 00:30:24.840 "traddr": "10.0.0.1", 00:30:24.840 "trsvcid": "39372" 00:30:24.840 }, 00:30:24.840 "auth": { 00:30:24.840 "state": "completed", 00:30:24.840 "digest": "sha512", 00:30:24.840 "dhgroup": "ffdhe6144" 00:30:24.840 } 00:30:24.840 } 00:30:24.840 ]' 00:30:24.840 08:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:30:24.840 08:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:24.840 08:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:30:24.840 08:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:24.840 08:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:30:25.098 08:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:25.098 08:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:25.098 08:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:25.356 08:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZmYzODNkMzJlNmZmMzlkMDkzZDM2YzJiMTQxNTI0M2FjZTE3MDFiYThjZGUxMmFlN2MzNDQ3MmFkNTNhZTYyZgA1QD8=: 00:30:26.289 08:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:26.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:26.289 08:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:26.289 08:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:26.289 08:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:26.289 08:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:26.289 08:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:30:26.289 08:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:30:26.289 08:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:26.289 08:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:26.546 08:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:30:26.546 08:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:30:26.546 08:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:26.546 08:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:30:26.546 08:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:30:26.546 08:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:30:26.546 08:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:26.546 08:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:26.546 08:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:26.546 08:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:30:26.546 08:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:30:27.479 00:30:27.479 08:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:30:27.479 08:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:30:27.479 08:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:27.479 08:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:27.479 08:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:27.479 08:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:27.479 08:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:27.479 08:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:27.479 08:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:30:27.479 { 00:30:27.479 "cntlid": 137, 00:30:27.479 "qid": 0, 00:30:27.479 "state": "enabled", 00:30:27.479 "listen_address": { 00:30:27.479 "trtype": "TCP", 00:30:27.479 "adrfam": "IPv4", 00:30:27.479 "traddr": "10.0.0.2", 00:30:27.479 "trsvcid": "4420" 00:30:27.479 }, 00:30:27.479 "peer_address": { 00:30:27.479 "trtype": "TCP", 00:30:27.479 "adrfam": "IPv4", 00:30:27.479 "traddr": "10.0.0.1", 00:30:27.479 "trsvcid": "38518" 00:30:27.479 }, 00:30:27.479 "auth": { 00:30:27.479 "state": "completed", 00:30:27.479 "digest": "sha512", 00:30:27.479 "dhgroup": "ffdhe8192" 00:30:27.479 } 00:30:27.479 } 00:30:27.479 ]' 00:30:27.479 08:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:30:27.736 08:56:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:27.736 08:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:30:27.737 08:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:27.737 08:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:30:27.737 08:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:27.737 08:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:27.737 08:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:27.995 08:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:YWI3YjQ2MThmOGM4ZjY1MTg4ZGU0NTI4ODI0MGM0MzMyYWZlZGQyNDc5NmJlOGNivkRwUg==: 00:30:28.927 08:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:28.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:28.927 08:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:28.927 08:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.927 08:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:28.927 08:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.927 08:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:30:28.927 08:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:28.927 08:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:29.185 08:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:30:29.185 08:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:30:29.186 08:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:29.186 08:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:30:29.186 08:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:30:29.186 08:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:30:29.186 08:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:29.186 08:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:29.186 08:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:29.186 08:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:30:29.186 08:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:30:30.119 00:30:30.119 08:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:30:30.119 08:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:30:30.119 08:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:30.377 08:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:30.377 08:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:30.377 08:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:30.377 08:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:30.377 08:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:30.377 08:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:30:30.377 { 00:30:30.377 "cntlid": 139, 00:30:30.377 "qid": 0, 00:30:30.377 "state": "enabled", 00:30:30.377 "listen_address": { 00:30:30.377 "trtype": "TCP", 00:30:30.377 "adrfam": "IPv4", 00:30:30.377 "traddr": "10.0.0.2", 00:30:30.377 "trsvcid": "4420" 00:30:30.377 }, 00:30:30.377 "peer_address": { 00:30:30.377 "trtype": "TCP", 00:30:30.377 "adrfam": "IPv4", 00:30:30.377 "traddr": "10.0.0.1", 00:30:30.377 "trsvcid": "38554" 00:30:30.377 }, 00:30:30.377 "auth": { 00:30:30.377 "state": "completed", 00:30:30.377 "digest": "sha512", 00:30:30.377 "dhgroup": "ffdhe8192" 00:30:30.377 } 00:30:30.377 } 00:30:30.377 ]' 00:30:30.377 08:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:30:30.377 08:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:30.377 08:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:30:30.377 08:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:30.377 08:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:30:30.377 08:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:30.377 08:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:30.377 08:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:30.635 08:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:N2EyM2IxNDc3MjM3ZjYyZTk5YjU0YjcxYWI1MWM4Mjl3KYtU: 00:30:31.569 08:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:31.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:30:31.569 08:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:31.569 08:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.569 08:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:31.569 08:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.569 08:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:30:31.569 08:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:31.569 08:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:31.858 08:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:30:31.858 08:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:30:31.858 08:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:31.858 08:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:30:31.858 08:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:30:31.858 08:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:30:31.858 08:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.858 08:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:31.858 08:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.858 08:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:30:31.858 08:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:30:32.793 00:30:32.793 08:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:30:32.793 08:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:30:32.793 08:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:33.051 08:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:33.051 08:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:33.051 08:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:33.051 08:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:33.051 08:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
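Each round also exercises the kernel initiator against the same subsystem: nvme-cli connects with the DHHC-1 secret matching the key just installed, disconnects, and the host is removed from the subsystem before the next key or DH group is configured. In shorthand, with the secret as a placeholder for the literal DHHC-1:0N:... strings visible in this trace and HOSTID standing for the 29f67375-... host identifier:

    # Kernel-initiator half of a round (sketch; the secret and the shell
    # variables are placeholders for the literal values logged above).
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" \
        --dhchap-secret 'DHHC-1:02:<base64 key material>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        "$HOSTNQN"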
00:30:33.051 08:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:30:33.051 { 00:30:33.051 "cntlid": 141, 00:30:33.051 "qid": 0, 00:30:33.051 "state": "enabled", 00:30:33.051 "listen_address": { 00:30:33.051 "trtype": "TCP", 00:30:33.051 "adrfam": "IPv4", 00:30:33.051 "traddr": "10.0.0.2", 00:30:33.051 "trsvcid": "4420" 00:30:33.051 }, 00:30:33.051 "peer_address": { 00:30:33.051 "trtype": "TCP", 00:30:33.051 "adrfam": "IPv4", 00:30:33.051 "traddr": "10.0.0.1", 00:30:33.051 "trsvcid": "38576" 00:30:33.051 }, 00:30:33.051 "auth": { 00:30:33.051 "state": "completed", 00:30:33.051 "digest": "sha512", 00:30:33.051 "dhgroup": "ffdhe8192" 00:30:33.051 } 00:30:33.051 } 00:30:33.051 ]' 00:30:33.051 08:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:30:33.051 08:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:33.051 08:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:30:33.051 08:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:33.051 08:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:30:33.051 08:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:33.051 08:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:33.051 08:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:33.310 08:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZTliOWIwMjlmNzE4OWMzMDVhOTQyYjA3NmE3NWI3Y2UyZmQyOTgxNGYzMTkxYjkzn7qbcw==: 00:30:34.684 08:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:34.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:34.684 08:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:34.684 08:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.684 08:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:34.684 08:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.684 08:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:30:34.684 08:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:34.684 08:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:34.684 08:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:30:34.684 08:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:30:34.684 08:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:34.684 08:56:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:30:34.684 08:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:30:34.684 08:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:30:34.684 08:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.684 08:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:34.684 08:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.684 08:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:34.684 08:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:35.617 00:30:35.618 08:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:30:35.618 08:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:30:35.618 08:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:35.876 08:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:35.876 08:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:35.876 08:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:35.876 08:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:35.876 08:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:35.876 08:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:30:35.876 { 00:30:35.876 "cntlid": 143, 00:30:35.876 "qid": 0, 00:30:35.876 "state": "enabled", 00:30:35.876 "listen_address": { 00:30:35.876 "trtype": "TCP", 00:30:35.876 "adrfam": "IPv4", 00:30:35.876 "traddr": "10.0.0.2", 00:30:35.876 "trsvcid": "4420" 00:30:35.876 }, 00:30:35.876 "peer_address": { 00:30:35.876 "trtype": "TCP", 00:30:35.876 "adrfam": "IPv4", 00:30:35.876 "traddr": "10.0.0.1", 00:30:35.876 "trsvcid": "38596" 00:30:35.876 }, 00:30:35.876 "auth": { 00:30:35.876 "state": "completed", 00:30:35.876 "digest": "sha512", 00:30:35.876 "dhgroup": "ffdhe8192" 00:30:35.876 } 00:30:35.876 } 00:30:35.876 ]' 00:30:35.876 08:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:30:35.876 08:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:35.876 08:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:30:35.876 08:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:35.876 08:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:30:35.876 08:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:35.876 08:56:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:35.877 08:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:36.135 08:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZmYzODNkMzJlNmZmMzlkMDkzZDM2YzJiMTQxNTI0M2FjZTE3MDFiYThjZGUxMmFlN2MzNDQ3MmFkNTNhZTYyZgA1QD8=: 00:30:37.069 08:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:37.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:37.069 08:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:37.069 08:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:37.069 08:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:37.069 08:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:37.069 08:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:30:37.069 08:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:30:37.069 08:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:30:37.069 08:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:37.069 08:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:37.069 08:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:37.328 08:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:30:37.328 08:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:30:37.328 08:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:37.328 08:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:30:37.328 08:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:30:37.328 08:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:30:37.328 08:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:37.328 08:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:37.328 08:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:37.328 08:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:30:37.328 
08:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:30:38.263 00:30:38.263 08:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:30:38.264 08:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:30:38.264 08:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:38.522 08:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:38.522 08:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:38.522 08:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:38.522 08:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:38.522 08:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:38.522 08:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:30:38.522 { 00:30:38.522 "cntlid": 145, 00:30:38.522 "qid": 0, 00:30:38.522 "state": "enabled", 00:30:38.522 "listen_address": { 00:30:38.522 "trtype": "TCP", 00:30:38.522 "adrfam": "IPv4", 00:30:38.522 "traddr": "10.0.0.2", 00:30:38.522 "trsvcid": "4420" 00:30:38.522 }, 00:30:38.522 "peer_address": { 00:30:38.522 "trtype": "TCP", 00:30:38.522 "adrfam": "IPv4", 00:30:38.522 "traddr": "10.0.0.1", 00:30:38.522 "trsvcid": "55476" 00:30:38.522 }, 00:30:38.522 "auth": { 00:30:38.522 "state": "completed", 00:30:38.522 "digest": "sha512", 00:30:38.522 "dhgroup": "ffdhe8192" 00:30:38.522 } 00:30:38.522 } 00:30:38.522 ]' 00:30:38.522 08:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:30:38.522 08:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:38.522 08:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:30:38.522 08:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:38.522 08:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:30:38.522 08:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:38.522 08:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:38.522 08:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:38.780 08:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:YWI3YjQ2MThmOGM4ZjY1MTg4ZGU0NTI4ODI0MGM0MzMyYWZlZGQyNDc5NmJlOGNivkRwUg==: 00:30:39.713 08:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:39.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:39.713 08:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:39.713 08:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:39.713 08:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:39.713 08:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:39.713 08:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:30:39.713 08:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:39.713 08:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:39.713 08:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:39.713 08:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:30:39.713 08:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:30:39.713 08:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:30:39.713 08:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:30:39.713 08:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:39.713 08:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:30:39.713 08:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:39.713 08:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:30:39.713 08:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:30:40.646 request: 00:30:40.646 { 00:30:40.646 "name": "nvme0", 00:30:40.646 "trtype": "tcp", 00:30:40.646 "traddr": "10.0.0.2", 00:30:40.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:30:40.646 "adrfam": "ipv4", 00:30:40.646 "trsvcid": "4420", 00:30:40.646 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:30:40.646 "dhchap_key": "key2", 00:30:40.646 "method": "bdev_nvme_attach_controller", 00:30:40.646 "req_id": 1 00:30:40.646 } 00:30:40.646 Got JSON-RPC error response 00:30:40.646 response: 00:30:40.646 { 00:30:40.646 "code": -32602, 00:30:40.646 "message": "Invalid parameters" 00:30:40.646 } 00:30:40.646 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:30:40.646 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:30:40.646 08:56:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:30:40.646 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:30:40.646 08:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:40.646 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.646 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:40.646 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.646 08:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:30:40.646 08:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:30:40.646 08:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2286490 00:30:40.646 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 2286490 ']' 00:30:40.646 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 2286490 00:30:40.646 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:30:40.646 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:40.646 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2286490 00:30:40.646 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:30:40.646 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:30:40.646 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2286490' 00:30:40.646 killing process with pid 2286490 00:30:40.646 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 2286490 00:30:40.646 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 2286490 00:30:41.211 08:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:30:41.211 08:56:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:41.211 08:56:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:30:41.211 08:56:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:41.211 08:56:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:30:41.211 08:56:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:41.211 08:56:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:41.211 rmmod nvme_tcp 00:30:41.211 rmmod nvme_fabrics 00:30:41.211 rmmod nvme_keyring 00:30:41.211 08:56:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:41.211 08:56:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:30:41.211 08:56:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:30:41.211 08:56:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2286469 ']' 00:30:41.211 08:56:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2286469 00:30:41.211 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 2286469 ']' 00:30:41.211 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 2286469 00:30:41.211 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:30:41.211 
08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:41.211 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2286469 00:30:41.211 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:30:41.211 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:30:41.211 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2286469' 00:30:41.211 killing process with pid 2286469 00:30:41.211 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 2286469 00:30:41.211 08:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 2286469 00:30:41.471 08:56:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:41.471 08:56:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:41.471 08:56:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:41.471 08:56:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:41.471 08:56:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:41.471 08:56:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.471 08:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:41.471 08:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.372 08:56:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:43.372 08:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ndc /tmp/spdk.key-sha256.2Qr /tmp/spdk.key-sha384.veH /tmp/spdk.key-sha512.Xui /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:30:43.372 00:30:43.372 real 2m57.525s 00:30:43.372 user 6m52.194s 00:30:43.372 sys 0m21.117s 00:30:43.372 08:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:43.372 08:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:43.372 ************************************ 00:30:43.372 END TEST nvmf_auth_target 00:30:43.372 ************************************ 00:30:43.372 08:56:38 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:30:43.372 08:56:38 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:30:43.372 08:56:38 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:30:43.372 08:56:38 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:43.372 08:56:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:43.631 ************************************ 00:30:43.631 START TEST nvmf_bdevio_no_huge 00:30:43.631 ************************************ 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:30:43.631 * Looking for test storage... 
00:30:43.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:43.631 08:56:38 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:30:43.631 08:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:46.162 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:46.162 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:46.162 Found net devices under 0000:09:00.0: cvl_0_0 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:46.162 08:56:40 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:46.162 Found net devices under 0000:09:00.1: cvl_0_1 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:30:46.162 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:46.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:46.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:30:46.163 00:30:46.163 --- 10.0.0.2 ping statistics --- 00:30:46.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.163 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:46.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:46.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:30:46.163 00:30:46.163 --- 10.0.0.1 ping statistics --- 00:30:46.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.163 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2310424 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2310424 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@828 -- # '[' -z 2310424 ']' 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
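(The ping exchange above closes out network bring-up: the rig moves the target-side port into a private network namespace so initiator and target can talk over a real TCP path on one machine. Condensed from the ip/iptables commands traced above — cvl_0_0/cvl_0_1 are the port names this rig enumerated and will differ on other hardware.)

    # Namespace bring-up, condensed from the trace above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                          # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target -> initiator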
00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:46.163 08:56:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:46.163 [2024-05-15 08:56:40.912289] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:30:46.163 [2024-05-15 08:56:40.912361] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:30:46.421 [2024-05-15 08:56:40.991970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:46.421 [2024-05-15 08:56:41.082421] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:46.421 [2024-05-15 08:56:41.082484] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:46.421 [2024-05-15 08:56:41.082512] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:46.421 [2024-05-15 08:56:41.082527] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:46.421 [2024-05-15 08:56:41.082539] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:46.421 [2024-05-15 08:56:41.082634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:46.421 [2024-05-15 08:56:41.082699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:46.421 [2024-05-15 08:56:41.082750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:46.421 [2024-05-15 08:56:41.082753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:46.421 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:46.421 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@861 -- # return 0 00:30:46.421 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:46.421 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:46.421 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:46.421 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:46.421 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:46.421 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.421 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:46.421 [2024-05-15 08:56:41.199764] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:46.421 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.421 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:46.421 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.421 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:46.680 Malloc0 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:46.680 [2024-05-15 08:56:41.237360] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:46.680 [2024-05-15 08:56:41.237667] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:46.680 { 00:30:46.680 "params": { 00:30:46.680 "name": "Nvme$subsystem", 00:30:46.680 "trtype": "$TEST_TRANSPORT", 00:30:46.680 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:46.680 "adrfam": "ipv4", 00:30:46.680 "trsvcid": "$NVMF_PORT", 00:30:46.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:46.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:46.680 "hdgst": ${hdgst:-false}, 00:30:46.680 "ddgst": ${ddgst:-false} 00:30:46.680 }, 00:30:46.680 "method": "bdev_nvme_attach_controller" 00:30:46.680 } 00:30:46.680 EOF 00:30:46.680 )") 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:30:46.680 08:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:46.680 "params": { 00:30:46.680 "name": "Nvme1", 00:30:46.680 "trtype": "tcp", 00:30:46.680 "traddr": "10.0.0.2", 00:30:46.680 "adrfam": "ipv4", 00:30:46.680 "trsvcid": "4420", 00:30:46.680 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:46.680 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:46.680 "hdgst": false, 00:30:46.680 "ddgst": false 00:30:46.680 }, 00:30:46.680 "method": "bdev_nvme_attach_controller" 00:30:46.680 }' 00:30:46.680 [2024-05-15 08:56:41.279985] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:30:46.680 [2024-05-15 08:56:41.280083] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2310451 ] 00:30:46.680 [2024-05-15 08:56:41.352698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:46.680 [2024-05-15 08:56:41.439335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.680 [2024-05-15 08:56:41.439388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:46.680 [2024-05-15 08:56:41.439392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.969 I/O targets: 00:30:46.969 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:30:46.969 00:30:46.969 00:30:46.969 CUnit - A unit testing framework for C - Version 2.1-3 00:30:46.969 http://cunit.sourceforge.net/ 00:30:46.969 00:30:46.969 00:30:46.969 Suite: bdevio tests on: Nvme1n1 00:30:46.969 Test: blockdev write read block ...passed 00:30:46.969 Test: blockdev write zeroes read block ...passed 00:30:46.969 Test: blockdev write zeroes read no split ...passed 00:30:47.228 Test: blockdev write zeroes read split ...passed 00:30:47.228 Test: blockdev write zeroes read split partial ...passed 00:30:47.228 Test: blockdev reset ...[2024-05-15 08:56:41.757182] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.228 [2024-05-15 08:56:41.757300] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113c160 (9): Bad file descriptor 00:30:47.228 [2024-05-15 08:56:41.771918] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:47.228 passed 00:30:47.228 Test: blockdev write read 8 blocks ...passed 00:30:47.228 Test: blockdev write read size > 128k ...passed 00:30:47.228 Test: blockdev write read invalid size ...passed 00:30:47.228 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:47.228 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:47.228 Test: blockdev write read max offset ...passed 00:30:47.228 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:47.228 Test: blockdev writev readv 8 blocks ...passed 00:30:47.228 Test: blockdev writev readv 30 x 1block ...passed 00:30:47.228 Test: blockdev writev readv block ...passed 00:30:47.228 Test: blockdev writev readv size > 128k ...passed 00:30:47.228 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:47.228 Test: blockdev comparev and writev ...[2024-05-15 08:56:41.987910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:47.228 [2024-05-15 08:56:41.987948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:47.228 [2024-05-15 08:56:41.987973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:47.228 [2024-05-15 08:56:41.987991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.228 [2024-05-15 08:56:41.988334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:47.228 [2024-05-15 08:56:41.988359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:47.228 [2024-05-15 08:56:41.988383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:47.228 [2024-05-15 08:56:41.988400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:47.228 [2024-05-15 08:56:41.988743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:47.228 [2024-05-15 08:56:41.988769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:47.228 [2024-05-15 08:56:41.988792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:47.228 [2024-05-15 08:56:41.988810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:47.228 [2024-05-15 08:56:41.989140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:47.228 [2024-05-15 08:56:41.989170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:47.228 [2024-05-15 08:56:41.989194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:47.228 [2024-05-15 08:56:41.989211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:47.486 passed 00:30:47.486 Test: blockdev nvme passthru rw ...passed 00:30:47.486 Test: blockdev nvme passthru vendor specific ...[2024-05-15 08:56:42.073506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:47.486 [2024-05-15 08:56:42.073533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:47.486 [2024-05-15 08:56:42.073703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:47.487 [2024-05-15 08:56:42.073727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:47.487 [2024-05-15 08:56:42.073892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:47.487 [2024-05-15 08:56:42.073917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:47.487 [2024-05-15 08:56:42.074084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:47.487 [2024-05-15 08:56:42.074107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:47.487 passed 00:30:47.487 Test: blockdev nvme admin passthru ...passed 00:30:47.487 Test: blockdev copy ...passed 00:30:47.487 00:30:47.487 Run Summary: Type Total Ran Passed Failed Inactive 00:30:47.487 suites 1 1 n/a 0 0 00:30:47.487 tests 23 23 23 0 0 00:30:47.487 asserts 152 152 152 0 n/a 00:30:47.487 00:30:47.487 Elapsed time = 1.074 seconds 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:47.745 rmmod nvme_tcp 00:30:47.745 rmmod nvme_fabrics 00:30:47.745 rmmod nvme_keyring 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2310424 ']' 00:30:47.745 08:56:42 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2310424 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@947 -- # '[' -z 2310424 ']' 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # kill -0 2310424 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # uname 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2310424 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2310424' 00:30:47.745 killing process with pid 2310424 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # kill 2310424 00:30:47.745 [2024-05-15 08:56:42.527912] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:47.745 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # wait 2310424 00:30:48.311 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:48.311 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:48.312 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:48.312 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:48.312 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:48.312 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.312 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:48.312 08:56:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.211 08:56:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:50.211 00:30:50.211 real 0m6.767s 00:30:50.211 user 0m9.617s 00:30:50.211 sys 0m2.875s 00:30:50.211 08:56:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:50.211 08:56:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:50.211 ************************************ 00:30:50.211 END TEST nvmf_bdevio_no_huge 00:30:50.211 ************************************ 00:30:50.211 08:56:44 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:30:50.211 08:56:44 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:30:50.211 08:56:44 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:50.211 08:56:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:50.211 ************************************ 00:30:50.211 START TEST nvmf_tls 00:30:50.211 ************************************ 00:30:50.211 08:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
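The nvmf_bdevio_no_huge teardown that just completed follows the harness's standard cleanup idiom: retry unloading the kernel NVMe/TCP modules (they can stay busy briefly), kill the target only after checking what the pid currently names, then drop the target-side namespace and flush the initiator interface. A minimal sketch reconstructed from the trace above (the pid, namespace, and interface names are the ones from this run; the real logic lives in nvmf/common.sh and autotest_common.sh and handles more edge cases):

    nvmfpid=2310424                              # target pid from this run
    sync
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break         # module may still be busy; retry
    done
    modprobe -v -r nvme-fabrics
    if kill -0 "$nvmfpid" 2>/dev/null; then
        # the harness inspects the comm name first (reactor vs. sudo wrapper)
        [ "$(ps --no-headers -o comm= "$nvmfpid")" != sudo ] && kill "$nvmfpid"
        wait "$nvmfpid"                          # valid here: the target is a child of the harness shell
    fi
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null  # roughly what _remove_spdk_ns does
    ip -4 addr flush cvl_0_1                     # clear the initiator-side interface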
00:30:50.469 * Looking for test storage... 00:30:50.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:30:50.470 08:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@291 -- # pci_devs=() 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:52.999 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:52.999 
08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:52.999 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:52.999 Found net devices under 0000:09:00.0: cvl_0_0 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:52.999 Found net devices under 0000:09:00.1: cvl_0_1 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:52.999 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:53.000 
08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:53.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:53.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:30:53.000 00:30:53.000 --- 10.0.0.2 ping statistics --- 00:30:53.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.000 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:53.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:53.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:30:53.000 00:30:53.000 --- 10.0.0.1 ping statistics --- 00:30:53.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.000 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2312928 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2312928 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2312928 ']' 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:53.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:53.000 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:53.000 [2024-05-15 08:56:47.746376] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:30:53.000 [2024-05-15 08:56:47.746458] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:53.000 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.258 [2024-05-15 08:56:47.823138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.258 [2024-05-15 08:56:47.906394] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:53.258 [2024-05-15 08:56:47.906456] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:53.258 [2024-05-15 08:56:47.906469] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:53.258 [2024-05-15 08:56:47.906480] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:53.258 [2024-05-15 08:56:47.906490] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:53.258 [2024-05-15 08:56:47.906525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.258 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:53.258 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:30:53.258 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:53.258 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:53.258 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:53.258 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:53.258 08:56:47 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:30:53.258 08:56:47 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:30:53.516 true 00:30:53.516 08:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:30:53.516 08:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:30:53.774 08:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:30:53.774 08:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:30:53.774 08:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:30:54.032 08:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:30:54.032 08:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:30:54.290 08:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:30:54.290 08:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:30:54.290 08:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:30:54.547 08:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:30:54.547 08:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:30:54.805 08:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:30:54.805 08:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:30:54.805 08:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:30:54.805 08:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:30:55.062 08:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:30:55.062 08:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:30:55.062 08:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:30:55.319 08:56:49 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:30:55.319 08:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:30:55.576 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:30:55.576 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:30:55.576 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:30:55.835 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:30:55.835 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:30:56.093 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.rKruHcyuOo 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.aS5suBF3g1 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.rKruHcyuOo 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.aS5suBF3g1 00:30:56.094 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:30:56.352 08:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:30:56.917 08:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.rKruHcyuOo 00:30:56.917 08:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.rKruHcyuOo 00:30:56.917 08:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:56.917 [2024-05-15 08:56:51.696794] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:57.175 08:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:30:57.175 08:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:30:57.432 [2024-05-15 08:56:52.170032] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:57.432 [2024-05-15 08:56:52.170144] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:57.432 [2024-05-15 08:56:52.170385] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:57.432 08:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:30:57.689 malloc0 00:30:57.689 08:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:57.947 08:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rKruHcyuOo 00:30:58.205 [2024-05-15 08:56:52.924639] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:58.205 08:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.rKruHcyuOo 00:30:58.205 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.433 Initializing NVMe Controllers 00:31:10.434 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:10.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:10.434 Initialization complete. Launching workers. 
00:31:10.434 ======================================================== 00:31:10.434 Latency(us) 00:31:10.434 Device Information : IOPS MiB/s Average min max 00:31:10.434 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7791.88 30.44 8216.54 1320.53 13077.54 00:31:10.434 ======================================================== 00:31:10.434 Total : 7791.88 30.44 8216.54 1320.53 13077.54 00:31:10.434 00:31:10.434 08:57:03 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rKruHcyuOo 00:31:10.434 08:57:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:10.434 08:57:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:10.434 08:57:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:10.434 08:57:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rKruHcyuOo' 00:31:10.434 08:57:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:10.434 08:57:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2314831 00:31:10.434 08:57:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:10.434 08:57:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2314831 /var/tmp/bdevperf.sock 00:31:10.434 08:57:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:10.434 08:57:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2314831 ']' 00:31:10.434 08:57:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:10.434 08:57:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:10.434 08:57:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:10.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:10.434 08:57:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:10.434 08:57:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:10.434 [2024-05-15 08:57:03.090267] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:31:10.434 [2024-05-15 08:57:03.090364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2314831 ] 00:31:10.434 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.434 [2024-05-15 08:57:03.164308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.434 [2024-05-15 08:57:03.245477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:10.434 08:57:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:10.434 08:57:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:31:10.434 08:57:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rKruHcyuOo 00:31:10.434 [2024-05-15 08:57:03.573638] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:10.434 [2024-05-15 08:57:03.573767] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:10.434 TLSTESTn1 00:31:10.434 08:57:03 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:31:10.434 Running I/O for 10 seconds... 00:31:20.408 00:31:20.408 Latency(us) 00:31:20.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:20.408 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:20.408 Verification LBA range: start 0x0 length 0x2000 00:31:20.408 TLSTESTn1 : 10.02 3160.52 12.35 0.00 0.00 40427.20 9223.59 39612.87 00:31:20.408 =================================================================================================================== 00:31:20.408 Total : 3160.52 12.35 0.00 0.00 40427.20 9223.59 39612.87 00:31:20.408 0 00:31:20.408 08:57:13 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:20.408 08:57:13 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2314831 00:31:20.408 08:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2314831 ']' 00:31:20.408 08:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2314831 00:31:20.408 08:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:31:20.408 08:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:20.408 08:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2314831 00:31:20.408 08:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:31:20.408 08:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:31:20.408 08:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2314831' 00:31:20.408 killing process with pid 2314831 00:31:20.408 08:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2314831 00:31:20.408 Received shutdown signal, test time was about 10.000000 seconds 00:31:20.408 00:31:20.408 Latency(us) 00:31:20.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:31:20.408 =================================================================================================================== 00:31:20.408 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:20.408 [2024-05-15 08:57:13.859921] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:31:20.408 08:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2314831 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aS5suBF3g1 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aS5suBF3g1 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aS5suBF3g1 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aS5suBF3g1' 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2316632 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2316632 /var/tmp/bdevperf.sock 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2316632 ']' 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:20.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:20.408 [2024-05-15 08:57:14.098800] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:31:20.408 [2024-05-15 08:57:14.098874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2316632 ] 00:31:20.408 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.408 [2024-05-15 08:57:14.165561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.408 [2024-05-15 08:57:14.249766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aS5suBF3g1 00:31:20.408 [2024-05-15 08:57:14.588374] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:20.408 [2024-05-15 08:57:14.588493] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:20.408 [2024-05-15 08:57:14.593788] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:20.408 [2024-05-15 08:57:14.594339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ed700 (107): Transport endpoint is not connected 00:31:20.408 [2024-05-15 08:57:14.595327] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ed700 (9): Bad file descriptor 00:31:20.408 [2024-05-15 08:57:14.596325] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:20.408 [2024-05-15 08:57:14.596347] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:31:20.408 [2024-05-15 08:57:14.596365] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
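This failure is the point of the test: the initiator presented the second interchange key (/tmp/tmp.aS5suBF3g1) while the target only registered the first one for host1, so the TLS handshake never completes and the attach dies with "Transport endpoint is not connected". Both keys were generated earlier with format_interchange_psk; a minimal sketch of that encoding, reconstructed from the trace (it mirrors the `python -` heredoc in nvmf/common.sh; the little-endian CRC-32 suffix is an assumption worth checking against the real helper):

    python - <<'EOF'
    import base64, zlib
    key = b'00112233445566778899aabbccddeeff'    # ASCII key text, as passed above
    crc = zlib.crc32(key).to_bytes(4, 'little')  # assumed little-endian CRC-32 suffix
    print('NVMeTLSkey-1:01:' + base64.b64encode(key + crc).decode() + ':')
    EOF

With this run's first key the output should match the trace (NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:); the "01" field is the hash identifier (SHA-256 here), selected by the digest argument "1" seen earlier. The rpc.py client's request and error-response dump for the failed attach follows.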
00:31:20.408 request: 00:31:20.408 { 00:31:20.408 "name": "TLSTEST", 00:31:20.408 "trtype": "tcp", 00:31:20.408 "traddr": "10.0.0.2", 00:31:20.408 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:20.408 "adrfam": "ipv4", 00:31:20.408 "trsvcid": "4420", 00:31:20.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:20.408 "psk": "/tmp/tmp.aS5suBF3g1", 00:31:20.408 "method": "bdev_nvme_attach_controller", 00:31:20.408 "req_id": 1 00:31:20.408 } 00:31:20.408 Got JSON-RPC error response 00:31:20.408 response: 00:31:20.408 { 00:31:20.408 "code": -32602, 00:31:20.408 "message": "Invalid parameters" 00:31:20.408 } 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2316632 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2316632 ']' 00:31:20.408 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2316632 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2316632 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2316632' 00:31:20.409 killing process with pid 2316632 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2316632 00:31:20.409 Received shutdown signal, test time was about 10.000000 seconds 00:31:20.409 00:31:20.409 Latency(us) 00:31:20.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:20.409 =================================================================================================================== 00:31:20.409 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:20.409 [2024-05-15 08:57:14.647570] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2316632 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rKruHcyuOo 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rKruHcyuOo 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 
-- # case "$(type -t "$arg")" in 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rKruHcyuOo 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rKruHcyuOo' 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2316766 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2316766 /var/tmp/bdevperf.sock 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2316766 ']' 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:20.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:20.409 08:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:20.409 [2024-05-15 08:57:14.908301] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:31:20.409 [2024-05-15 08:57:14.908392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2316766 ] 00:31:20.409 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.409 [2024-05-15 08:57:14.977968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.409 [2024-05-15 08:57:15.057503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:20.409 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:20.409 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:31:20.409 08:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.rKruHcyuOo 00:31:20.666 [2024-05-15 08:57:15.390488] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:20.666 [2024-05-15 08:57:15.390603] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:20.666 [2024-05-15 08:57:15.395647] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:31:20.666 [2024-05-15 08:57:15.395677] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:31:20.666 [2024-05-15 08:57:15.395712] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:20.666 [2024-05-15 08:57:15.396188] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae700 (107): Transport endpoint is not connected 00:31:20.666 [2024-05-15 08:57:15.397176] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae700 (9): Bad file descriptor 00:31:20.666 [2024-05-15 08:57:15.398176] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:20.667 [2024-05-15 08:57:15.398211] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:31:20.667 [2024-05-15 08:57:15.398235] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
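This time the handshake dies on the target side before a controller ever forms: the TLS PSK identity the target logs ("NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1") is built from the host and subsystem NQNs, and only host1 was ever registered against cnode1, so tcp_sock_get_key finds nothing for host2. Registering the second host would make the same lookup resolve; a sketch using this run's names and key path:

    # already done earlier in this run, for host1:
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rKruHcyuOo
    # what it would take for the host2 identity above to resolve:
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.rKruHcyuOo

Again the client's request/response dump follows.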
00:31:20.667 request: 00:31:20.667 { 00:31:20.667 "name": "TLSTEST", 00:31:20.667 "trtype": "tcp", 00:31:20.667 "traddr": "10.0.0.2", 00:31:20.667 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:20.667 "adrfam": "ipv4", 00:31:20.667 "trsvcid": "4420", 00:31:20.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:20.667 "psk": "/tmp/tmp.rKruHcyuOo", 00:31:20.667 "method": "bdev_nvme_attach_controller", 00:31:20.667 "req_id": 1 00:31:20.667 } 00:31:20.667 Got JSON-RPC error response 00:31:20.667 response: 00:31:20.667 { 00:31:20.667 "code": -32602, 00:31:20.667 "message": "Invalid parameters" 00:31:20.667 } 00:31:20.667 08:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2316766 00:31:20.667 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2316766 ']' 00:31:20.667 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2316766 00:31:20.667 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:31:20.667 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:20.667 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2316766 00:31:20.667 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:31:20.667 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:31:20.667 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2316766' 00:31:20.667 killing process with pid 2316766 00:31:20.667 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2316766 00:31:20.667 Received shutdown signal, test time was about 10.000000 seconds 00:31:20.667 00:31:20.667 Latency(us) 00:31:20.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:20.667 =================================================================================================================== 00:31:20.667 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:20.667 [2024-05-15 08:57:15.437886] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:31:20.667 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2316766 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rKruHcyuOo 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rKruHcyuOo 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 
-- # case "$(type -t "$arg")" in 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rKruHcyuOo 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rKruHcyuOo' 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2316830 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:20.924 08:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2316830 /var/tmp/bdevperf.sock 00:31:20.925 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2316830 ']' 00:31:20.925 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:20.925 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:20.925 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:20.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:20.925 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:20.925 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:20.925 [2024-05-15 08:57:15.673788] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:31:20.925 [2024-05-15 08:57:15.673867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2316830 ] 00:31:20.925 EAL: No free 2048 kB hugepages reported on node 1 00:31:21.181 [2024-05-15 08:57:15.743741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.181 [2024-05-15 08:57:15.822866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:21.181 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:21.182 08:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:31:21.182 08:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rKruHcyuOo 00:31:21.439 [2024-05-15 08:57:16.149912] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:21.439 [2024-05-15 08:57:16.150039] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:21.439 [2024-05-15 08:57:16.155437] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:31:21.439 [2024-05-15 08:57:16.155470] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:31:21.439 [2024-05-15 08:57:16.155510] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:21.439 [2024-05-15 08:57:16.155953] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x84a700 (107): Transport endpoint is not connected 00:31:21.439 [2024-05-15 08:57:16.156942] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x84a700 (9): Bad file descriptor 00:31:21.439 [2024-05-15 08:57:16.157941] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:31:21.439 [2024-05-15 08:57:16.157963] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:31:21.439 [2024-05-15 08:57:16.157981] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
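Every bdevperf instance in this run is launched with -m 0x4, which is why each one logs "Reactor started on core 2": the mask is a bitmap of enabled cores, and 0x4 has only bit 2 set. A quick decode of such a mask, nothing SPDK-specific:

  # 0x4 = 0b100 -> only core 2 enabled; compare "Reactor started on core 2"
  mask=0x4; for i in {0..7}; do (( (mask >> i) & 1 )) && echo "core $i"; done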
00:31:21.439 request: 00:31:21.439 { 00:31:21.439 "name": "TLSTEST", 00:31:21.439 "trtype": "tcp", 00:31:21.439 "traddr": "10.0.0.2", 00:31:21.439 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:21.439 "adrfam": "ipv4", 00:31:21.439 "trsvcid": "4420", 00:31:21.439 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:21.439 "psk": "/tmp/tmp.rKruHcyuOo", 00:31:21.439 "method": "bdev_nvme_attach_controller", 00:31:21.439 "req_id": 1 00:31:21.439 } 00:31:21.439 Got JSON-RPC error response 00:31:21.439 response: 00:31:21.439 { 00:31:21.439 "code": -32602, 00:31:21.439 "message": "Invalid parameters" 00:31:21.439 } 00:31:21.439 08:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2316830 00:31:21.439 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2316830 ']' 00:31:21.439 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2316830 00:31:21.439 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:31:21.439 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:21.439 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2316830 00:31:21.439 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:31:21.439 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:31:21.439 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2316830' 00:31:21.439 killing process with pid 2316830 00:31:21.439 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2316830 00:31:21.439 Received shutdown signal, test time was about 10.000000 seconds 00:31:21.439 00:31:21.439 Latency(us) 00:31:21.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:21.439 =================================================================================================================== 00:31:21.439 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:21.439 [2024-05-15 08:57:16.210538] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:31:21.439 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2316830 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
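The request:/response: dumps above are the JSON-RPC 2.0 messages exchanged over the bdevperf UNIX socket; -32602 is the standard JSON-RPC "Invalid params" code, which rpc.py turns into a non-zero exit status for the test's NOT wrapper to assert on. As a hedged sketch, the same call can be issued without rpc.py, assuming a netcat build with UNIX-socket support (rpc.py does essentially this framing under the hood):

  printf '%s' '{"jsonrpc": "2.0", "id": 1,
    "method": "bdev_nvme_attach_controller",
    "params": {"name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
      "adrfam": "ipv4", "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode2",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "psk": "/tmp/tmp.rKruHcyuOo"}}' | nc -U /var/tmp/bdevperf.sock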
00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2316927 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2316927 /var/tmp/bdevperf.sock 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2316927 ']' 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:21.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:21.697 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:21.697 [2024-05-15 08:57:16.469780] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:31:21.697 [2024-05-15 08:57:16.469865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2316927 ] 00:31:21.954 EAL: No free 2048 kB hugepages reported on node 1 00:31:21.954 [2024-05-15 08:57:16.535843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.954 [2024-05-15 08:57:16.612453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:21.954 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:21.954 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:31:21.955 08:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:31:22.212 [2024-05-15 08:57:16.957565] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:22.212 [2024-05-15 08:57:16.959504] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb20dd0 (9): Bad file descriptor 00:31:22.212 [2024-05-15 08:57:16.960490] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:22.212 [2024-05-15 08:57:16.960537] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:31:22.212 [2024-05-15 08:57:16.960554] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:22.212 request: 00:31:22.212 { 00:31:22.212 "name": "TLSTEST", 00:31:22.212 "trtype": "tcp", 00:31:22.212 "traddr": "10.0.0.2", 00:31:22.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:22.212 "adrfam": "ipv4", 00:31:22.212 "trsvcid": "4420", 00:31:22.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:22.212 "method": "bdev_nvme_attach_controller", 00:31:22.212 "req_id": 1 00:31:22.212 } 00:31:22.212 Got JSON-RPC error response 00:31:22.212 response: 00:31:22.212 { 00:31:22.212 "code": -32602, 00:31:22.212 "message": "Invalid parameters" 00:31:22.212 } 00:31:22.212 08:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2316927 00:31:22.212 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2316927 ']' 00:31:22.212 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2316927 00:31:22.212 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:31:22.212 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:22.212 08:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2316927 00:31:22.212 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:31:22.213 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:31:22.213 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2316927' 00:31:22.213 killing process with pid 2316927 00:31:22.213 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2316927 00:31:22.213 Received shutdown signal, test time was about 10.000000 seconds 00:31:22.213 00:31:22.213 Latency(us) 00:31:22.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:22.213 =================================================================================================================== 00:31:22.213 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:22.213 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2316927 00:31:22.470 08:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:31:22.470 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:31:22.470 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:31:22.471 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:31:22.471 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:31:22.471 08:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2312928 00:31:22.471 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2312928 ']' 00:31:22.471 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2312928 00:31:22.471 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:31:22.471 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:22.471 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2312928 00:31:22.471 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:31:22.471 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:31:22.471 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2312928' 00:31:22.471 killing process with pid 2312928 00:31:22.471 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2312928 
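With the misconfiguration cases done, the run tears down the first target and then builds a fresh key via format_interchange_psk ... 2, i.e. the NVMe/TCP PSK interchange format NVMeTLSkey-1:<hash>:<base64 blob>:, where "02" selects the SHA-384 PSK digest ("01" would be SHA-256) and the base64 payload is the configured key bytes with a CRC32 appended. A minimal sketch of the same computation, assuming (as in SPDK's nvmf/common.sh helper) that the CRC is appended little-endian; if it reproduces the NVMeTLSkey-1:02:... value captured in the trace below, the assumption holds:

  python3 -c 'import base64, zlib; k = b"00112233445566778899aabbccddeeff0011223344556677"; print("NVMeTLSkey-1:02:%s:" % base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())'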
00:31:22.471 [2024-05-15 08:57:17.227054] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:22.471 [2024-05-15 08:57:17.227100] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:22.471 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2312928 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.ydBc68Nn4l 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.ydBc68Nn4l 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2317076 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2317076 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2317076 ']' 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:22.729 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:22.987 [2024-05-15 08:57:17.555072] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:31:22.987 [2024-05-15 08:57:17.555148] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:22.987 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.987 [2024-05-15 08:57:17.645017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.987 [2024-05-15 08:57:17.736342] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:22.987 [2024-05-15 08:57:17.736417] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:22.987 [2024-05-15 08:57:17.736442] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:22.987 [2024-05-15 08:57:17.736463] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:22.987 [2024-05-15 08:57:17.736482] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:22.987 [2024-05-15 08:57:17.736524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.245 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:23.245 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:31:23.245 08:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:23.245 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:23.245 08:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:23.245 08:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:23.245 08:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.ydBc68Nn4l 00:31:23.245 08:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ydBc68Nn4l 00:31:23.245 08:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:23.502 [2024-05-15 08:57:18.132641] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:23.502 08:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:31:23.759 08:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:31:24.017 [2024-05-15 08:57:18.633930] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:24.017 [2024-05-15 08:57:18.634040] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:24.017 [2024-05-15 08:57:18.634334] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:24.017 08:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:31:24.275 malloc0 00:31:24.275 08:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
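Condensed, the setup_nvmf_tgt steps traced above amount to five RPCs against the target's default /var/tmp/spdk.sock; the only TLS-specific part is the -k flag on the listener, which demands a secure channel and is what the earlier no-PSK attach ran into:

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS-only (secure channel) listener
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1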
00:31:24.532 08:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ydBc68Nn4l 00:31:24.789 [2024-05-15 08:57:19.499044] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:24.789 08:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ydBc68Nn4l 00:31:24.789 08:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:24.789 08:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:24.789 08:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:24.789 08:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ydBc68Nn4l' 00:31:24.789 08:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:24.789 08:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2317360 00:31:24.789 08:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:24.789 08:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:24.789 08:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2317360 /var/tmp/bdevperf.sock 00:31:24.789 08:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2317360 ']' 00:31:24.789 08:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:24.789 08:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:24.789 08:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:24.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:24.789 08:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:24.789 08:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:24.790 [2024-05-15 08:57:19.558315] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:31:24.790 [2024-05-15 08:57:19.558390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2317360 ] 00:31:25.047 EAL: No free 2048 kB hugepages reported on node 1 00:31:25.047 [2024-05-15 08:57:19.624889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.047 [2024-05-15 08:57:19.703272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:25.047 08:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:25.047 08:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:31:25.047 08:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ydBc68Nn4l 00:31:25.305 [2024-05-15 08:57:20.078943] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:25.305 [2024-05-15 08:57:20.079048] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:25.562 TLSTESTn1 00:31:25.562 08:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:31:25.562 Running I/O for 10 seconds... 00:31:37.752 00:31:37.752 Latency(us) 00:31:37.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:37.752 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:37.752 Verification LBA range: start 0x0 length 0x2000 00:31:37.752 TLSTESTn1 : 10.02 3320.70 12.97 0.00 0.00 38477.70 7767.23 53593.88 00:31:37.752 =================================================================================================================== 00:31:37.752 Total : 3320.70 12.97 0.00 0.00 38477.70 7767.23 53593.88 00:31:37.752 0 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2317360 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2317360 ']' 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2317360 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2317360 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2317360' 00:31:37.752 killing process with pid 2317360 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2317360 00:31:37.752 Received shutdown signal, test time was about 10.000000 seconds 00:31:37.752 00:31:37.752 Latency(us) 00:31:37.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:31:37.752 =================================================================================================================== 00:31:37.752 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:37.752 [2024-05-15 08:57:30.370685] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2317360 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.ydBc68Nn4l 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ydBc68Nn4l 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ydBc68Nn4l 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ydBc68Nn4l 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ydBc68Nn4l' 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2318649 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2318649 /var/tmp/bdevperf.sock 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2318649 ']' 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:37.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:37.752 [2024-05-15 08:57:30.626885] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:31:37.752 [2024-05-15 08:57:30.626976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2318649 ] 00:31:37.752 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.752 [2024-05-15 08:57:30.695824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.752 [2024-05-15 08:57:30.777303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:31:37.752 08:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ydBc68Nn4l 00:31:37.752 [2024-05-15 08:57:31.131893] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:37.752 [2024-05-15 08:57:31.131981] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:31:37.752 [2024-05-15 08:57:31.131995] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.ydBc68Nn4l 00:31:37.752 request: 00:31:37.752 { 00:31:37.752 "name": "TLSTEST", 00:31:37.752 "trtype": "tcp", 00:31:37.752 "traddr": "10.0.0.2", 00:31:37.752 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:37.752 "adrfam": "ipv4", 00:31:37.752 "trsvcid": "4420", 00:31:37.752 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:37.752 "psk": "/tmp/tmp.ydBc68Nn4l", 00:31:37.752 "method": "bdev_nvme_attach_controller", 00:31:37.752 "req_id": 1 00:31:37.752 } 00:31:37.752 Got JSON-RPC error response 00:31:37.752 response: 00:31:37.752 { 00:31:37.752 "code": -1, 00:31:37.752 "message": "Operation not permitted" 00:31:37.752 } 00:31:37.752 08:57:31 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2318649 00:31:37.752 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2318649 ']' 00:31:37.752 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2318649 00:31:37.752 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:31:37.752 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:37.752 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2318649 00:31:37.752 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:31:37.752 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2318649' 00:31:37.753 killing process with pid 2318649 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2318649 00:31:37.753 Received shutdown signal, test time was about 10.000000 seconds 00:31:37.753 00:31:37.753 Latency(us) 00:31:37.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:37.753 =================================================================================================================== 00:31:37.753 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 
-- # wait 2318649 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2317076 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2317076 ']' 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2317076 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2317076 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2317076' 00:31:37.753 killing process with pid 2317076 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2317076 00:31:37.753 [2024-05-15 08:57:31.427099] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:37.753 [2024-05-15 08:57:31.427162] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2317076 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2318738 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2318738 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2318738 ']' 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:37.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
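The chmod 0666 /tmp/tmp.ydBc68Nn4l case above exercises the initiator-side permission check: bdev_nvme_load_psk refuses a PSK file that other users can read, and the attach surfaces it as JSON-RPC error -1 "Operation not permitted". The trace only demonstrates 0600 passing and 0666 failing; where exactly the cut-off lies (e.g. whether group-readable is tolerated) is not shown here:

  chmod 0666 /tmp/tmp.ydBc68Nn4l   # attach fails: "Incorrect permissions for PSK file"
  chmod 0600 /tmp/tmp.ydBc68Nn4l   # owner-only again: attach is accepted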
00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:37.753 [2024-05-15 08:57:31.716556] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:31:37.753 [2024-05-15 08:57:31.716647] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:37.753 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.753 [2024-05-15 08:57:31.788073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.753 [2024-05-15 08:57:31.867823] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:37.753 [2024-05-15 08:57:31.867877] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:37.753 [2024-05-15 08:57:31.867891] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:37.753 [2024-05-15 08:57:31.867902] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:37.753 [2024-05-15 08:57:31.867913] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:37.753 [2024-05-15 08:57:31.867944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.ydBc68Nn4l 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ydBc68Nn4l 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:31:37.753 08:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:37.753 08:57:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.ydBc68Nn4l 00:31:37.753 08:57:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ydBc68Nn4l 00:31:37.753 08:57:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:37.753 [2024-05-15 08:57:32.241654] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:37.753 08:57:32 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:31:37.753 08:57:32 nvmf_tcp.nvmf_tls 
-- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:31:38.053 [2024-05-15 08:57:32.771025] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:38.053 [2024-05-15 08:57:32.771113] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:38.053 [2024-05-15 08:57:32.771371] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:38.053 08:57:32 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:31:38.311 malloc0 00:31:38.311 08:57:33 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:38.568 08:57:33 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ydBc68Nn4l 00:31:38.826 [2024-05-15 08:57:33.541640] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:31:38.826 [2024-05-15 08:57:33.541681] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:31:38.826 [2024-05-15 08:57:33.541714] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:31:38.826 request: 00:31:38.826 { 00:31:38.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:38.826 "host": "nqn.2016-06.io.spdk:host1", 00:31:38.826 "psk": "/tmp/tmp.ydBc68Nn4l", 00:31:38.826 "method": "nvmf_subsystem_add_host", 00:31:38.826 "req_id": 1 00:31:38.826 } 00:31:38.826 Got JSON-RPC error response 00:31:38.826 response: 00:31:38.826 { 00:31:38.826 "code": -32603, 00:31:38.826 "message": "Internal error" 00:31:38.826 } 00:31:38.826 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:31:38.826 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:31:38.826 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:31:38.826 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:31:38.826 08:57:33 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2318738 00:31:38.826 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2318738 ']' 00:31:38.826 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2318738 00:31:38.826 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:31:38.826 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:38.826 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2318738 00:31:38.826 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:31:38.826 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:31:38.826 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2318738' 00:31:38.826 killing process with pid 2318738 00:31:38.826 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2318738 00:31:38.826 [2024-05-15 08:57:33.592422] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:38.826 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2318738 00:31:39.084 08:57:33 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.ydBc68Nn4l 00:31:39.084 08:57:33 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:31:39.084 08:57:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:39.084 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:31:39.084 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:39.084 08:57:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2318997 00:31:39.084 08:57:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:39.084 08:57:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2318997 00:31:39.084 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2318997 ']' 00:31:39.084 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:39.084 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:39.084 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:39.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:39.084 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:39.084 08:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:39.342 [2024-05-15 08:57:33.885898] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:31:39.342 [2024-05-15 08:57:33.885987] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:39.342 EAL: No free 2048 kB hugepages reported on node 1 00:31:39.342 [2024-05-15 08:57:33.962441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.342 [2024-05-15 08:57:34.045157] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:39.342 [2024-05-15 08:57:34.045228] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:39.342 [2024-05-15 08:57:34.045243] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:39.342 [2024-05-15 08:57:34.045254] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:39.342 [2024-05-15 08:57:34.045263] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
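The target enforces the same rule independently of the initiator: with the key still world-readable, nvmf_subsystem_add_host failed above in tcp_load_psk ("Incorrect permissions for PSK file") and the RPC returned -32603 "Internal error" rather than the initiator's -1, so a loose key is rejected at registration time, before any host can even attempt a handshake:

  # Fails while the key is 0666; succeeds once it is back to 0600:
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ydBc68Nn4l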
00:31:39.342 [2024-05-15 08:57:34.045288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:39.600 08:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:39.600 08:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:31:39.600 08:57:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:39.600 08:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:39.600 08:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:39.600 08:57:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:39.600 08:57:34 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.ydBc68Nn4l 00:31:39.600 08:57:34 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ydBc68Nn4l 00:31:39.600 08:57:34 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:39.858 [2024-05-15 08:57:34.442674] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:39.858 08:57:34 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:31:40.116 08:57:34 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:31:40.373 [2024-05-15 08:57:34.976055] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:40.373 [2024-05-15 08:57:34.976140] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:40.373 [2024-05-15 08:57:34.976385] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.373 08:57:34 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:31:40.631 malloc0 00:31:40.631 08:57:35 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:40.888 08:57:35 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ydBc68Nn4l 00:31:41.146 [2024-05-15 08:57:35.765835] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:41.146 08:57:35 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2319280 00:31:41.146 08:57:35 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:41.146 08:57:35 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:41.146 08:57:35 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2319280 /var/tmp/bdevperf.sock 00:31:41.146 08:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2319280 ']' 00:31:41.146 08:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:41.146 08:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:41.146 08:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:41.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:41.146 08:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:41.146 08:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:41.146 [2024-05-15 08:57:35.820016] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:31:41.146 [2024-05-15 08:57:35.820099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2319280 ] 00:31:41.146 EAL: No free 2048 kB hugepages reported on node 1 00:31:41.146 [2024-05-15 08:57:35.905701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.404 [2024-05-15 08:57:36.000992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:41.404 08:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:41.404 08:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:31:41.404 08:57:36 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ydBc68Nn4l 00:31:41.663 [2024-05-15 08:57:36.394416] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:41.663 [2024-05-15 08:57:36.394546] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:41.921 TLSTESTn1 00:31:41.921 08:57:36 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:31:42.179 08:57:36 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:31:42.179 "subsystems": [ 00:31:42.179 { 00:31:42.179 "subsystem": "keyring", 00:31:42.179 "config": [] 00:31:42.179 }, 00:31:42.179 { 00:31:42.179 "subsystem": "iobuf", 00:31:42.179 "config": [ 00:31:42.179 { 00:31:42.179 "method": "iobuf_set_options", 00:31:42.179 "params": { 00:31:42.179 "small_pool_count": 8192, 00:31:42.179 "large_pool_count": 1024, 00:31:42.179 "small_bufsize": 8192, 00:31:42.179 "large_bufsize": 135168 00:31:42.179 } 00:31:42.179 } 00:31:42.179 ] 00:31:42.179 }, 00:31:42.179 { 00:31:42.179 "subsystem": "sock", 00:31:42.179 "config": [ 00:31:42.179 { 00:31:42.179 "method": "sock_impl_set_options", 00:31:42.179 "params": { 00:31:42.179 "impl_name": "posix", 00:31:42.179 "recv_buf_size": 2097152, 00:31:42.179 "send_buf_size": 2097152, 00:31:42.179 "enable_recv_pipe": true, 00:31:42.179 "enable_quickack": false, 00:31:42.180 "enable_placement_id": 0, 00:31:42.180 "enable_zerocopy_send_server": true, 00:31:42.180 "enable_zerocopy_send_client": false, 00:31:42.180 "zerocopy_threshold": 0, 00:31:42.180 "tls_version": 0, 00:31:42.180 "enable_ktls": false 00:31:42.180 } 00:31:42.180 }, 00:31:42.180 { 00:31:42.180 "method": "sock_impl_set_options", 00:31:42.180 "params": { 00:31:42.180 
"impl_name": "ssl", 00:31:42.180 "recv_buf_size": 4096, 00:31:42.180 "send_buf_size": 4096, 00:31:42.180 "enable_recv_pipe": true, 00:31:42.180 "enable_quickack": false, 00:31:42.180 "enable_placement_id": 0, 00:31:42.180 "enable_zerocopy_send_server": true, 00:31:42.180 "enable_zerocopy_send_client": false, 00:31:42.180 "zerocopy_threshold": 0, 00:31:42.180 "tls_version": 0, 00:31:42.180 "enable_ktls": false 00:31:42.180 } 00:31:42.180 } 00:31:42.180 ] 00:31:42.180 }, 00:31:42.180 { 00:31:42.180 "subsystem": "vmd", 00:31:42.180 "config": [] 00:31:42.180 }, 00:31:42.180 { 00:31:42.180 "subsystem": "accel", 00:31:42.180 "config": [ 00:31:42.180 { 00:31:42.180 "method": "accel_set_options", 00:31:42.180 "params": { 00:31:42.180 "small_cache_size": 128, 00:31:42.180 "large_cache_size": 16, 00:31:42.180 "task_count": 2048, 00:31:42.180 "sequence_count": 2048, 00:31:42.180 "buf_count": 2048 00:31:42.180 } 00:31:42.180 } 00:31:42.180 ] 00:31:42.180 }, 00:31:42.180 { 00:31:42.180 "subsystem": "bdev", 00:31:42.180 "config": [ 00:31:42.180 { 00:31:42.180 "method": "bdev_set_options", 00:31:42.180 "params": { 00:31:42.180 "bdev_io_pool_size": 65535, 00:31:42.180 "bdev_io_cache_size": 256, 00:31:42.180 "bdev_auto_examine": true, 00:31:42.180 "iobuf_small_cache_size": 128, 00:31:42.180 "iobuf_large_cache_size": 16 00:31:42.180 } 00:31:42.180 }, 00:31:42.180 { 00:31:42.180 "method": "bdev_raid_set_options", 00:31:42.180 "params": { 00:31:42.180 "process_window_size_kb": 1024 00:31:42.180 } 00:31:42.180 }, 00:31:42.180 { 00:31:42.180 "method": "bdev_iscsi_set_options", 00:31:42.180 "params": { 00:31:42.180 "timeout_sec": 30 00:31:42.180 } 00:31:42.180 }, 00:31:42.180 { 00:31:42.180 "method": "bdev_nvme_set_options", 00:31:42.180 "params": { 00:31:42.180 "action_on_timeout": "none", 00:31:42.180 "timeout_us": 0, 00:31:42.180 "timeout_admin_us": 0, 00:31:42.180 "keep_alive_timeout_ms": 10000, 00:31:42.180 "arbitration_burst": 0, 00:31:42.180 "low_priority_weight": 0, 00:31:42.180 "medium_priority_weight": 0, 00:31:42.180 "high_priority_weight": 0, 00:31:42.180 "nvme_adminq_poll_period_us": 10000, 00:31:42.180 "nvme_ioq_poll_period_us": 0, 00:31:42.180 "io_queue_requests": 0, 00:31:42.180 "delay_cmd_submit": true, 00:31:42.180 "transport_retry_count": 4, 00:31:42.180 "bdev_retry_count": 3, 00:31:42.180 "transport_ack_timeout": 0, 00:31:42.180 "ctrlr_loss_timeout_sec": 0, 00:31:42.180 "reconnect_delay_sec": 0, 00:31:42.180 "fast_io_fail_timeout_sec": 0, 00:31:42.180 "disable_auto_failback": false, 00:31:42.180 "generate_uuids": false, 00:31:42.180 "transport_tos": 0, 00:31:42.180 "nvme_error_stat": false, 00:31:42.180 "rdma_srq_size": 0, 00:31:42.180 "io_path_stat": false, 00:31:42.180 "allow_accel_sequence": false, 00:31:42.180 "rdma_max_cq_size": 0, 00:31:42.180 "rdma_cm_event_timeout_ms": 0, 00:31:42.180 "dhchap_digests": [ 00:31:42.180 "sha256", 00:31:42.180 "sha384", 00:31:42.180 "sha512" 00:31:42.180 ], 00:31:42.180 "dhchap_dhgroups": [ 00:31:42.180 "null", 00:31:42.180 "ffdhe2048", 00:31:42.180 "ffdhe3072", 00:31:42.180 "ffdhe4096", 00:31:42.180 "ffdhe6144", 00:31:42.180 "ffdhe8192" 00:31:42.180 ] 00:31:42.180 } 00:31:42.180 }, 00:31:42.180 { 00:31:42.180 "method": "bdev_nvme_set_hotplug", 00:31:42.180 "params": { 00:31:42.180 "period_us": 100000, 00:31:42.180 "enable": false 00:31:42.180 } 00:31:42.180 }, 00:31:42.180 { 00:31:42.180 "method": "bdev_malloc_create", 00:31:42.180 "params": { 00:31:42.180 "name": "malloc0", 00:31:42.180 "num_blocks": 8192, 00:31:42.180 "block_size": 4096, 00:31:42.180 
"physical_block_size": 4096, 00:31:42.180 "uuid": "20bc303b-62cd-4fbf-bbce-90b70418f541", 00:31:42.180 "optimal_io_boundary": 0 00:31:42.180 } 00:31:42.180 }, 00:31:42.180 { 00:31:42.180 "method": "bdev_wait_for_examine" 00:31:42.180 } 00:31:42.180 ] 00:31:42.180 }, 00:31:42.180 { 00:31:42.180 "subsystem": "nbd", 00:31:42.180 "config": [] 00:31:42.180 }, 00:31:42.180 { 00:31:42.180 "subsystem": "scheduler", 00:31:42.180 "config": [ 00:31:42.180 { 00:31:42.180 "method": "framework_set_scheduler", 00:31:42.180 "params": { 00:31:42.180 "name": "static" 00:31:42.180 } 00:31:42.180 } 00:31:42.180 ] 00:31:42.180 }, 00:31:42.180 { 00:31:42.180 "subsystem": "nvmf", 00:31:42.180 "config": [ 00:31:42.180 { 00:31:42.180 "method": "nvmf_set_config", 00:31:42.180 "params": { 00:31:42.180 "discovery_filter": "match_any", 00:31:42.180 "admin_cmd_passthru": { 00:31:42.180 "identify_ctrlr": false 00:31:42.180 } 00:31:42.180 } 00:31:42.180 }, 00:31:42.180 { 00:31:42.180 "method": "nvmf_set_max_subsystems", 00:31:42.180 "params": { 00:31:42.180 "max_subsystems": 1024 00:31:42.180 } 00:31:42.180 }, 00:31:42.180 { 00:31:42.180 "method": "nvmf_set_crdt", 00:31:42.180 "params": { 00:31:42.180 "crdt1": 0, 00:31:42.180 "crdt2": 0, 00:31:42.180 "crdt3": 0 00:31:42.180 } 00:31:42.180 }, 00:31:42.180 { 00:31:42.180 "method": "nvmf_create_transport", 00:31:42.180 "params": { 00:31:42.180 "trtype": "TCP", 00:31:42.180 "max_queue_depth": 128, 00:31:42.180 "max_io_qpairs_per_ctrlr": 127, 00:31:42.180 "in_capsule_data_size": 4096, 00:31:42.180 "max_io_size": 131072, 00:31:42.180 "io_unit_size": 131072, 00:31:42.180 "max_aq_depth": 128, 00:31:42.180 "num_shared_buffers": 511, 00:31:42.180 "buf_cache_size": 4294967295, 00:31:42.180 "dif_insert_or_strip": false, 00:31:42.180 "zcopy": false, 00:31:42.180 "c2h_success": false, 00:31:42.180 "sock_priority": 0, 00:31:42.180 "abort_timeout_sec": 1, 00:31:42.180 "ack_timeout": 0, 00:31:42.180 "data_wr_pool_size": 0 00:31:42.180 } 00:31:42.180 }, 00:31:42.180 { 00:31:42.180 "method": "nvmf_create_subsystem", 00:31:42.180 "params": { 00:31:42.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:42.180 "allow_any_host": false, 00:31:42.180 "serial_number": "SPDK00000000000001", 00:31:42.180 "model_number": "SPDK bdev Controller", 00:31:42.180 "max_namespaces": 10, 00:31:42.180 "min_cntlid": 1, 00:31:42.180 "max_cntlid": 65519, 00:31:42.180 "ana_reporting": false 00:31:42.180 } 00:31:42.180 }, 00:31:42.180 { 00:31:42.180 "method": "nvmf_subsystem_add_host", 00:31:42.180 "params": { 00:31:42.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:42.181 "host": "nqn.2016-06.io.spdk:host1", 00:31:42.181 "psk": "/tmp/tmp.ydBc68Nn4l" 00:31:42.181 } 00:31:42.181 }, 00:31:42.181 { 00:31:42.181 "method": "nvmf_subsystem_add_ns", 00:31:42.181 "params": { 00:31:42.181 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:42.181 "namespace": { 00:31:42.181 "nsid": 1, 00:31:42.181 "bdev_name": "malloc0", 00:31:42.181 "nguid": "20BC303B62CD4FBFBBCE90B70418F541", 00:31:42.181 "uuid": "20bc303b-62cd-4fbf-bbce-90b70418f541", 00:31:42.181 "no_auto_visible": false 00:31:42.181 } 00:31:42.181 } 00:31:42.181 }, 00:31:42.181 { 00:31:42.181 "method": "nvmf_subsystem_add_listener", 00:31:42.181 "params": { 00:31:42.181 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:42.181 "listen_address": { 00:31:42.181 "trtype": "TCP", 00:31:42.181 "adrfam": "IPv4", 00:31:42.181 "traddr": "10.0.0.2", 00:31:42.181 "trsvcid": "4420" 00:31:42.181 }, 00:31:42.181 "secure_channel": true 00:31:42.181 } 00:31:42.181 } 00:31:42.181 ] 00:31:42.181 } 
00:31:42.181 ] 00:31:42.181 }' 00:31:42.181 08:57:36 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:31:42.439 08:57:37 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:31:42.439 "subsystems": [ 00:31:42.439 { 00:31:42.439 "subsystem": "keyring", 00:31:42.439 "config": [] 00:31:42.439 }, 00:31:42.439 { 00:31:42.439 "subsystem": "iobuf", 00:31:42.439 "config": [ 00:31:42.439 { 00:31:42.439 "method": "iobuf_set_options", 00:31:42.439 "params": { 00:31:42.439 "small_pool_count": 8192, 00:31:42.439 "large_pool_count": 1024, 00:31:42.439 "small_bufsize": 8192, 00:31:42.439 "large_bufsize": 135168 00:31:42.439 } 00:31:42.439 } 00:31:42.439 ] 00:31:42.439 }, 00:31:42.439 { 00:31:42.439 "subsystem": "sock", 00:31:42.439 "config": [ 00:31:42.439 { 00:31:42.439 "method": "sock_impl_set_options", 00:31:42.439 "params": { 00:31:42.439 "impl_name": "posix", 00:31:42.439 "recv_buf_size": 2097152, 00:31:42.439 "send_buf_size": 2097152, 00:31:42.439 "enable_recv_pipe": true, 00:31:42.439 "enable_quickack": false, 00:31:42.439 "enable_placement_id": 0, 00:31:42.439 "enable_zerocopy_send_server": true, 00:31:42.439 "enable_zerocopy_send_client": false, 00:31:42.439 "zerocopy_threshold": 0, 00:31:42.439 "tls_version": 0, 00:31:42.439 "enable_ktls": false 00:31:42.439 } 00:31:42.439 }, 00:31:42.439 { 00:31:42.439 "method": "sock_impl_set_options", 00:31:42.439 "params": { 00:31:42.439 "impl_name": "ssl", 00:31:42.439 "recv_buf_size": 4096, 00:31:42.439 "send_buf_size": 4096, 00:31:42.439 "enable_recv_pipe": true, 00:31:42.439 "enable_quickack": false, 00:31:42.439 "enable_placement_id": 0, 00:31:42.439 "enable_zerocopy_send_server": true, 00:31:42.439 "enable_zerocopy_send_client": false, 00:31:42.439 "zerocopy_threshold": 0, 00:31:42.439 "tls_version": 0, 00:31:42.439 "enable_ktls": false 00:31:42.439 } 00:31:42.439 } 00:31:42.439 ] 00:31:42.439 }, 00:31:42.439 { 00:31:42.439 "subsystem": "vmd", 00:31:42.439 "config": [] 00:31:42.439 }, 00:31:42.439 { 00:31:42.439 "subsystem": "accel", 00:31:42.439 "config": [ 00:31:42.439 { 00:31:42.439 "method": "accel_set_options", 00:31:42.439 "params": { 00:31:42.439 "small_cache_size": 128, 00:31:42.439 "large_cache_size": 16, 00:31:42.439 "task_count": 2048, 00:31:42.439 "sequence_count": 2048, 00:31:42.439 "buf_count": 2048 00:31:42.439 } 00:31:42.439 } 00:31:42.439 ] 00:31:42.439 }, 00:31:42.439 { 00:31:42.439 "subsystem": "bdev", 00:31:42.439 "config": [ 00:31:42.439 { 00:31:42.439 "method": "bdev_set_options", 00:31:42.439 "params": { 00:31:42.439 "bdev_io_pool_size": 65535, 00:31:42.439 "bdev_io_cache_size": 256, 00:31:42.439 "bdev_auto_examine": true, 00:31:42.439 "iobuf_small_cache_size": 128, 00:31:42.439 "iobuf_large_cache_size": 16 00:31:42.439 } 00:31:42.439 }, 00:31:42.439 { 00:31:42.440 "method": "bdev_raid_set_options", 00:31:42.440 "params": { 00:31:42.440 "process_window_size_kb": 1024 00:31:42.440 } 00:31:42.440 }, 00:31:42.440 { 00:31:42.440 "method": "bdev_iscsi_set_options", 00:31:42.440 "params": { 00:31:42.440 "timeout_sec": 30 00:31:42.440 } 00:31:42.440 }, 00:31:42.440 { 00:31:42.440 "method": "bdev_nvme_set_options", 00:31:42.440 "params": { 00:31:42.440 "action_on_timeout": "none", 00:31:42.440 "timeout_us": 0, 00:31:42.440 "timeout_admin_us": 0, 00:31:42.440 "keep_alive_timeout_ms": 10000, 00:31:42.440 "arbitration_burst": 0, 00:31:42.440 "low_priority_weight": 0, 00:31:42.440 "medium_priority_weight": 0, 00:31:42.440 
"high_priority_weight": 0, 00:31:42.440 "nvme_adminq_poll_period_us": 10000, 00:31:42.440 "nvme_ioq_poll_period_us": 0, 00:31:42.440 "io_queue_requests": 512, 00:31:42.440 "delay_cmd_submit": true, 00:31:42.440 "transport_retry_count": 4, 00:31:42.440 "bdev_retry_count": 3, 00:31:42.440 "transport_ack_timeout": 0, 00:31:42.440 "ctrlr_loss_timeout_sec": 0, 00:31:42.440 "reconnect_delay_sec": 0, 00:31:42.440 "fast_io_fail_timeout_sec": 0, 00:31:42.440 "disable_auto_failback": false, 00:31:42.440 "generate_uuids": false, 00:31:42.440 "transport_tos": 0, 00:31:42.440 "nvme_error_stat": false, 00:31:42.440 "rdma_srq_size": 0, 00:31:42.440 "io_path_stat": false, 00:31:42.440 "allow_accel_sequence": false, 00:31:42.440 "rdma_max_cq_size": 0, 00:31:42.440 "rdma_cm_event_timeout_ms": 0, 00:31:42.440 "dhchap_digests": [ 00:31:42.440 "sha256", 00:31:42.440 "sha384", 00:31:42.440 "sha512" 00:31:42.440 ], 00:31:42.440 "dhchap_dhgroups": [ 00:31:42.440 "null", 00:31:42.440 "ffdhe2048", 00:31:42.440 "ffdhe3072", 00:31:42.440 "ffdhe4096", 00:31:42.440 "ffdhe6144", 00:31:42.440 "ffdhe8192" 00:31:42.440 ] 00:31:42.440 } 00:31:42.440 }, 00:31:42.440 { 00:31:42.440 "method": "bdev_nvme_attach_controller", 00:31:42.440 "params": { 00:31:42.440 "name": "TLSTEST", 00:31:42.440 "trtype": "TCP", 00:31:42.440 "adrfam": "IPv4", 00:31:42.440 "traddr": "10.0.0.2", 00:31:42.440 "trsvcid": "4420", 00:31:42.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:42.440 "prchk_reftag": false, 00:31:42.440 "prchk_guard": false, 00:31:42.440 "ctrlr_loss_timeout_sec": 0, 00:31:42.440 "reconnect_delay_sec": 0, 00:31:42.440 "fast_io_fail_timeout_sec": 0, 00:31:42.440 "psk": "/tmp/tmp.ydBc68Nn4l", 00:31:42.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:42.440 "hdgst": false, 00:31:42.440 "ddgst": false 00:31:42.440 } 00:31:42.440 }, 00:31:42.440 { 00:31:42.440 "method": "bdev_nvme_set_hotplug", 00:31:42.440 "params": { 00:31:42.440 "period_us": 100000, 00:31:42.440 "enable": false 00:31:42.440 } 00:31:42.440 }, 00:31:42.440 { 00:31:42.440 "method": "bdev_wait_for_examine" 00:31:42.440 } 00:31:42.440 ] 00:31:42.440 }, 00:31:42.440 { 00:31:42.440 "subsystem": "nbd", 00:31:42.440 "config": [] 00:31:42.440 } 00:31:42.440 ] 00:31:42.440 }' 00:31:42.440 08:57:37 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2319280 00:31:42.440 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2319280 ']' 00:31:42.440 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2319280 00:31:42.440 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:31:42.440 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:42.440 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2319280 00:31:42.440 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:31:42.440 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:31:42.440 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2319280' 00:31:42.440 killing process with pid 2319280 00:31:42.440 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2319280 00:31:42.440 Received shutdown signal, test time was about 10.000000 seconds 00:31:42.440 00:31:42.440 Latency(us) 00:31:42.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.440 
=================================================================================================================== 00:31:42.440 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:42.440 [2024-05-15 08:57:37.138599] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:31:42.440 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2319280 00:31:42.698 08:57:37 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2318997 00:31:42.698 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2318997 ']' 00:31:42.698 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2318997 00:31:42.698 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:31:42.698 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:42.698 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2318997 00:31:42.698 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:31:42.698 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:31:42.698 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2318997' 00:31:42.698 killing process with pid 2318997 00:31:42.698 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2318997 00:31:42.698 [2024-05-15 08:57:37.365045] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:42.698 [2024-05-15 08:57:37.365095] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:42.698 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2318997 00:31:42.956 08:57:37 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:31:42.956 08:57:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:42.956 08:57:37 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:31:42.956 "subsystems": [ 00:31:42.956 { 00:31:42.956 "subsystem": "keyring", 00:31:42.956 "config": [] 00:31:42.956 }, 00:31:42.956 { 00:31:42.956 "subsystem": "iobuf", 00:31:42.956 "config": [ 00:31:42.956 { 00:31:42.956 "method": "iobuf_set_options", 00:31:42.956 "params": { 00:31:42.956 "small_pool_count": 8192, 00:31:42.956 "large_pool_count": 1024, 00:31:42.956 "small_bufsize": 8192, 00:31:42.956 "large_bufsize": 135168 00:31:42.956 } 00:31:42.956 } 00:31:42.956 ] 00:31:42.956 }, 00:31:42.956 { 00:31:42.956 "subsystem": "sock", 00:31:42.956 "config": [ 00:31:42.956 { 00:31:42.956 "method": "sock_impl_set_options", 00:31:42.956 "params": { 00:31:42.956 "impl_name": "posix", 00:31:42.956 "recv_buf_size": 2097152, 00:31:42.956 "send_buf_size": 2097152, 00:31:42.956 "enable_recv_pipe": true, 00:31:42.956 "enable_quickack": false, 00:31:42.956 "enable_placement_id": 0, 00:31:42.956 "enable_zerocopy_send_server": true, 00:31:42.956 "enable_zerocopy_send_client": false, 00:31:42.956 "zerocopy_threshold": 0, 00:31:42.956 "tls_version": 0, 00:31:42.956 "enable_ktls": false 00:31:42.956 } 00:31:42.956 }, 00:31:42.956 { 00:31:42.956 "method": "sock_impl_set_options", 00:31:42.956 "params": { 00:31:42.956 "impl_name": "ssl", 00:31:42.956 "recv_buf_size": 4096, 00:31:42.956 
"send_buf_size": 4096, 00:31:42.956 "enable_recv_pipe": true, 00:31:42.956 "enable_quickack": false, 00:31:42.956 "enable_placement_id": 0, 00:31:42.956 "enable_zerocopy_send_server": true, 00:31:42.956 "enable_zerocopy_send_client": false, 00:31:42.956 "zerocopy_threshold": 0, 00:31:42.956 "tls_version": 0, 00:31:42.956 "enable_ktls": false 00:31:42.956 } 00:31:42.956 } 00:31:42.956 ] 00:31:42.956 }, 00:31:42.956 { 00:31:42.956 "subsystem": "vmd", 00:31:42.956 "config": [] 00:31:42.956 }, 00:31:42.956 { 00:31:42.956 "subsystem": "accel", 00:31:42.956 "config": [ 00:31:42.956 { 00:31:42.956 "method": "accel_set_options", 00:31:42.956 "params": { 00:31:42.956 "small_cache_size": 128, 00:31:42.956 "large_cache_size": 16, 00:31:42.956 "task_count": 2048, 00:31:42.956 "sequence_count": 2048, 00:31:42.956 "buf_count": 2048 00:31:42.956 } 00:31:42.956 } 00:31:42.956 ] 00:31:42.956 }, 00:31:42.956 { 00:31:42.956 "subsystem": "bdev", 00:31:42.956 "config": [ 00:31:42.956 { 00:31:42.956 "method": "bdev_set_options", 00:31:42.956 "params": { 00:31:42.956 "bdev_io_pool_size": 65535, 00:31:42.956 "bdev_io_cache_size": 256, 00:31:42.956 "bdev_auto_examine": true, 00:31:42.956 "iobuf_small_cache_size": 128, 00:31:42.956 "iobuf_large_cache_size": 16 00:31:42.956 } 00:31:42.956 }, 00:31:42.956 { 00:31:42.956 "method": "bdev_raid_set_options", 00:31:42.956 "params": { 00:31:42.956 "process_window_size_kb": 1024 00:31:42.956 } 00:31:42.956 }, 00:31:42.956 { 00:31:42.956 "method": "bdev_iscsi_set_options", 00:31:42.956 "params": { 00:31:42.956 "timeout_sec": 30 00:31:42.956 } 00:31:42.956 }, 00:31:42.956 { 00:31:42.956 "method": "bdev_nvme_set_options", 00:31:42.956 "params": { 00:31:42.956 "action_on_timeout": "none", 00:31:42.956 "timeout_us": 0, 00:31:42.956 "timeout_admin_us": 0, 00:31:42.956 "keep_alive_timeout_ms": 10000, 00:31:42.956 "arbitration_burst": 0, 00:31:42.956 "low_priority_weight": 0, 00:31:42.956 "medium_priority_weight": 0, 00:31:42.956 "high_priority_weight": 0, 00:31:42.957 "nvme_adminq_poll_period_us": 10000, 00:31:42.957 "nvme_ioq_poll_period_us": 0, 00:31:42.957 "io_queue_requests": 0, 00:31:42.957 "delay_cmd_submit": true, 00:31:42.957 "transport_retry_count": 4, 00:31:42.957 "bdev_retry_count": 3, 00:31:42.957 "transport_ack_timeout": 0, 00:31:42.957 "ctrlr_loss_timeout_sec": 0, 00:31:42.957 "reconnect_delay_sec": 0, 00:31:42.957 "fast_io_fail_timeout_sec": 0, 00:31:42.957 "disable_auto_failback": false, 00:31:42.957 "generate_uuids": false, 00:31:42.957 "transport_tos": 0, 00:31:42.957 "nvme_error_stat": false, 00:31:42.957 "rdma_srq_size": 0, 00:31:42.957 "io_path_stat": false, 00:31:42.957 "allow_accel_sequence": false, 00:31:42.957 "rdma_max_cq_size": 0, 00:31:42.957 "rdma_cm_event_timeout_ms": 0, 00:31:42.957 "dhchap_digests": [ 00:31:42.957 "sha256", 00:31:42.957 "sha384", 00:31:42.957 "sha512" 00:31:42.957 ], 00:31:42.957 "dhchap_dhgroups": [ 00:31:42.957 "null", 00:31:42.957 "ffdhe2048", 00:31:42.957 "ffdhe3072", 00:31:42.957 "ffdhe4096", 00:31:42.957 "ffdhe6144", 00:31:42.957 "ffdhe8192" 00:31:42.957 ] 00:31:42.957 } 00:31:42.957 }, 00:31:42.957 { 00:31:42.957 "method": "bdev_nvme_set_hotplug", 00:31:42.957 "params": { 00:31:42.957 "period_us": 100000, 00:31:42.957 "enable": false 00:31:42.957 } 00:31:42.957 }, 00:31:42.957 { 00:31:42.957 "method": "bdev_malloc_create", 00:31:42.957 "params": { 00:31:42.957 "name": "malloc0", 00:31:42.957 "num_blocks": 8192, 00:31:42.957 "block_size": 4096, 00:31:42.957 "physical_block_size": 4096, 00:31:42.957 "uuid": 
"20bc303b-62cd-4fbf-bbce-90b70418f541", 00:31:42.957 "optimal_io_boundary": 0 00:31:42.957 } 00:31:42.957 }, 00:31:42.957 { 00:31:42.957 "method": "bdev_wait_for_examine" 00:31:42.957 } 00:31:42.957 ] 00:31:42.957 }, 00:31:42.957 { 00:31:42.957 "subsystem": "nbd", 00:31:42.957 "config": [] 00:31:42.957 }, 00:31:42.957 { 00:31:42.957 "subsystem": "scheduler", 00:31:42.957 "config": [ 00:31:42.957 { 00:31:42.957 "method": "framework_set_scheduler", 00:31:42.957 "params": { 00:31:42.957 "name": "static" 00:31:42.957 } 00:31:42.957 } 00:31:42.957 ] 00:31:42.957 }, 00:31:42.957 { 00:31:42.957 "subsystem": "nvmf", 00:31:42.957 "config": [ 00:31:42.957 { 00:31:42.957 "method": "nvmf_set_config", 00:31:42.957 "params": { 00:31:42.957 "discovery_filter": "match_any", 00:31:42.957 "admin_cmd_passthru": { 00:31:42.957 "identify_ctrlr": false 00:31:42.957 } 00:31:42.957 } 00:31:42.957 }, 00:31:42.957 { 00:31:42.957 "method": "nvmf_set_max_subsystems", 00:31:42.957 "params": { 00:31:42.957 "max_subsystems": 1024 00:31:42.957 } 00:31:42.957 }, 00:31:42.957 { 00:31:42.957 "method": "nvmf_set_crdt", 00:31:42.957 "params": { 00:31:42.957 "crdt1": 0, 00:31:42.957 "crdt2": 0, 00:31:42.957 "crdt3": 0 00:31:42.957 } 00:31:42.957 }, 00:31:42.957 { 00:31:42.957 "method": "nvmf_create_transport", 00:31:42.957 "params": { 00:31:42.957 "trtype": "TCP", 00:31:42.957 "max_queue_depth": 128, 00:31:42.957 "max_io_qpairs_per_ctrlr": 127, 00:31:42.957 "in_capsule_data_size": 4096, 00:31:42.957 "max_io_size": 131072, 00:31:42.957 "io_unit_size": 131072, 00:31:42.957 "max_aq_depth": 128, 00:31:42.957 "num_shared_buffers": 511, 00:31:42.957 "buf_cache_size": 4294967295, 00:31:42.957 "dif_insert_or_strip": false, 00:31:42.957 "zcopy": false, 00:31:42.957 "c2h_success": false, 00:31:42.957 "sock_priority": 0, 00:31:42.957 "abort_timeout_sec": 1, 00:31:42.957 "ack_timeout": 0, 00:31:42.957 "data_wr_pool_size": 0 00:31:42.957 } 00:31:42.957 }, 00:31:42.957 { 00:31:42.957 "method": "nvmf_create_subsystem", 00:31:42.957 "params": { 00:31:42.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:42.957 "allow_any_host": false, 00:31:42.957 "serial_number": "SPDK00000000000001", 00:31:42.957 "model_number": "SPDK bdev Controller", 00:31:42.957 "max_namespaces": 10, 00:31:42.957 "min_cntlid": 1, 00:31:42.957 "max_cntlid": 65519, 00:31:42.957 "ana_reporting": false 00:31:42.957 } 00:31:42.957 }, 00:31:42.957 { 00:31:42.957 "method": "nvmf_subsystem_add_host", 00:31:42.957 "params": { 00:31:42.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:42.957 "host": "nqn.2016-06.io.spdk:host1", 00:31:42.957 "psk": "/tmp/tmp.ydBc68Nn4l" 00:31:42.957 } 00:31:42.957 }, 00:31:42.957 { 00:31:42.957 "method": "nvmf_subsystem_add_ns", 00:31:42.957 "params": { 00:31:42.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:42.957 "namespace": { 00:31:42.957 "nsid": 1, 00:31:42.957 "bdev_name": "malloc0", 00:31:42.957 "nguid": "20BC303B62CD4FBFBBCE90B70418F541", 00:31:42.957 "uuid": "20bc303b-62cd-4fbf-bbce-90b70418f541", 00:31:42.957 "no_auto_visible": false 00:31:42.957 } 00:31:42.957 } 00:31:42.957 }, 00:31:42.957 { 00:31:42.957 "method": "nvmf_subsystem_add_listener", 00:31:42.957 "params": { 00:31:42.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:42.957 "listen_address": { 00:31:42.957 "trtype": "TCP", 00:31:42.957 "adrfam": "IPv4", 00:31:42.957 "traddr": "10.0.0.2", 00:31:42.957 "trsvcid": "4420" 00:31:42.957 }, 00:31:42.957 "secure_channel": true 00:31:42.957 } 00:31:42.957 } 00:31:42.957 ] 00:31:42.957 } 00:31:42.957 ] 00:31:42.957 }' 00:31:42.957 08:57:37 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:31:42.957 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:42.957 08:57:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2319553 00:31:42.957 08:57:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:31:42.957 08:57:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2319553 00:31:42.957 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2319553 ']' 00:31:42.957 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:42.957 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:42.957 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:42.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:42.957 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:42.957 08:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:42.957 [2024-05-15 08:57:37.634050] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:31:42.957 [2024-05-15 08:57:37.634137] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:42.957 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.957 [2024-05-15 08:57:37.711543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:43.215 [2024-05-15 08:57:37.801861] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:43.215 [2024-05-15 08:57:37.801931] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:43.215 [2024-05-15 08:57:37.801948] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:43.215 [2024-05-15 08:57:37.801963] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:43.215 [2024-05-15 08:57:37.801975] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
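The target coming up here was launched with -c /dev/fd/62: it replays the tgtconf JSON captured by save_config at tls.sh@196 above, so the TLS listener, the malloc0 namespace and the PSK host entry are re-created in one shot instead of via individual RPCs. A minimal sketch of that round-trip, using only binaries and flags visible in this trace (treating tgtconf as a plain shell variable mirrors what the echo above implies, but is an assumption about the script):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

  tgtconf=$($rpc save_config)          # dump the live target state as JSON
  # restart and feed the JSON back in; the <(echo ...) process substitution
  # is what shows up as -c /dev/fd/62 in the trace above
  $tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
  nvmfpid=$!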
00:31:43.215 [2024-05-15 08:57:37.802073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.473 [2024-05-15 08:57:38.029625] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:43.473 [2024-05-15 08:57:38.045568] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:43.473 [2024-05-15 08:57:38.061587] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:43.473 [2024-05-15 08:57:38.061675] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:43.473 [2024-05-15 08:57:38.072424] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:44.039 08:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:44.039 08:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:31:44.039 08:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:44.039 08:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:44.039 08:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:44.039 08:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:44.039 08:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2319702 00:31:44.039 08:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2319702 /var/tmp/bdevperf.sock 00:31:44.039 08:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2319702 ']' 00:31:44.039 08:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:44.039 08:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:31:44.039 08:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:44.039 08:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:31:44.039 "subsystems": [ 00:31:44.039 { 00:31:44.039 "subsystem": "keyring", 00:31:44.039 "config": [] 00:31:44.039 }, 00:31:44.039 { 00:31:44.039 "subsystem": "iobuf", 00:31:44.039 "config": [ 00:31:44.039 { 00:31:44.039 "method": "iobuf_set_options", 00:31:44.039 "params": { 00:31:44.039 "small_pool_count": 8192, 00:31:44.039 "large_pool_count": 1024, 00:31:44.039 "small_bufsize": 8192, 00:31:44.039 "large_bufsize": 135168 00:31:44.039 } 00:31:44.039 } 00:31:44.039 ] 00:31:44.039 }, 00:31:44.039 { 00:31:44.039 "subsystem": "sock", 00:31:44.039 "config": [ 00:31:44.039 { 00:31:44.039 "method": "sock_impl_set_options", 00:31:44.039 "params": { 00:31:44.039 "impl_name": "posix", 00:31:44.039 "recv_buf_size": 2097152, 00:31:44.039 "send_buf_size": 2097152, 00:31:44.039 "enable_recv_pipe": true, 00:31:44.039 "enable_quickack": false, 00:31:44.039 "enable_placement_id": 0, 00:31:44.039 "enable_zerocopy_send_server": true, 00:31:44.039 "enable_zerocopy_send_client": false, 00:31:44.039 "zerocopy_threshold": 0, 00:31:44.039 "tls_version": 0, 00:31:44.039 "enable_ktls": false 00:31:44.039 } 00:31:44.039 }, 00:31:44.039 { 00:31:44.039 "method": "sock_impl_set_options", 00:31:44.039 "params": { 00:31:44.039 "impl_name": "ssl", 00:31:44.039 "recv_buf_size": 4096, 00:31:44.040 
"send_buf_size": 4096, 00:31:44.040 "enable_recv_pipe": true, 00:31:44.040 "enable_quickack": false, 00:31:44.040 "enable_placement_id": 0, 00:31:44.040 "enable_zerocopy_send_server": true, 00:31:44.040 "enable_zerocopy_send_client": false, 00:31:44.040 "zerocopy_threshold": 0, 00:31:44.040 "tls_version": 0, 00:31:44.040 "enable_ktls": false 00:31:44.040 } 00:31:44.040 } 00:31:44.040 ] 00:31:44.040 }, 00:31:44.040 { 00:31:44.040 "subsystem": "vmd", 00:31:44.040 "config": [] 00:31:44.040 }, 00:31:44.040 { 00:31:44.040 "subsystem": "accel", 00:31:44.040 "config": [ 00:31:44.040 { 00:31:44.040 "method": "accel_set_options", 00:31:44.040 "params": { 00:31:44.040 "small_cache_size": 128, 00:31:44.040 "large_cache_size": 16, 00:31:44.040 "task_count": 2048, 00:31:44.040 "sequence_count": 2048, 00:31:44.040 "buf_count": 2048 00:31:44.040 } 00:31:44.040 } 00:31:44.040 ] 00:31:44.040 }, 00:31:44.040 { 00:31:44.040 "subsystem": "bdev", 00:31:44.040 "config": [ 00:31:44.040 { 00:31:44.040 "method": "bdev_set_options", 00:31:44.040 "params": { 00:31:44.040 "bdev_io_pool_size": 65535, 00:31:44.040 "bdev_io_cache_size": 256, 00:31:44.040 "bdev_auto_examine": true, 00:31:44.040 "iobuf_small_cache_size": 128, 00:31:44.040 "iobuf_large_cache_size": 16 00:31:44.040 } 00:31:44.040 }, 00:31:44.040 { 00:31:44.040 "method": "bdev_raid_set_options", 00:31:44.040 "params": { 00:31:44.040 "process_window_size_kb": 1024 00:31:44.040 } 00:31:44.040 }, 00:31:44.040 { 00:31:44.040 "method": "bdev_iscsi_set_options", 00:31:44.040 "params": { 00:31:44.040 "timeout_sec": 30 00:31:44.040 } 00:31:44.040 }, 00:31:44.040 { 00:31:44.040 "method": "bdev_nvme_set_options", 00:31:44.040 "params": { 00:31:44.040 "action_on_timeout": "none", 00:31:44.040 "timeout_us": 0, 00:31:44.040 "timeout_admin_us": 0, 00:31:44.040 "keep_alive_timeout_ms": 10000, 00:31:44.040 "arbitration_burst": 0, 00:31:44.040 "low_priority_weight": 0, 00:31:44.040 "medium_priority_weight": 0, 00:31:44.040 "high_priority_weight": 0, 00:31:44.040 "nvme_adminq_poll_period_us": 10000, 00:31:44.040 "nvme_ioq_poll_period_us": 0, 00:31:44.040 "io_queue_requests": 512, 00:31:44.040 "delay_cmd_submit": true, 00:31:44.040 "transport_retry_count": 4, 00:31:44.040 "bdev_retry_count": 3, 00:31:44.040 "transport_ack_timeout": 0, 00:31:44.040 "ctrlr_loss_timeout_sec": 0, 00:31:44.040 "reconnect_delay_sec": 0, 00:31:44.040 "fast_io_fail_timeout_sec": 0, 00:31:44.040 "disable_auto_failback": false, 00:31:44.040 "generate_uuids": false, 00:31:44.040 "transport_tos": 0, 00:31:44.040 "nvme_error_stat": false, 00:31:44.040 "rdma_srq_size": 0, 00:31:44.040 "io_path_stat": false, 00:31:44.040 "allow_accel_sequence": false, 00:31:44.040 "rdma_max_cq_size": 0, 00:31:44.040 "rdma_cm_event_timeout_ms": 0, 00:31:44.040 "dhchap_digests": [ 00:31:44.040 "sha256", 00:31:44.040 "sha384", 00:31:44.040 "sha512" 00:31:44.040 ], 00:31:44.040 "dhchap_dhgroups": [ 00:31:44.040 "null", 00:31:44.040 "ffdhe2048", 00:31:44.040 "ffdhe3072", 00:31:44.040 "ffdhe4096", 00:31:44.040 "ffdhe6144", 00:31:44.040 "ffdhe8192" 00:31:44.040 ] 00:31:44.040 } 00:31:44.040 }, 00:31:44.040 { 00:31:44.040 "method": "bdev_nvme_attach_controller", 00:31:44.040 "params": { 00:31:44.040 "name": "TLSTEST", 00:31:44.040 "trtype": "TCP", 00:31:44.040 "adrfam": "IPv4", 00:31:44.040 "traddr": "10.0.0.2", 00:31:44.040 "trsvcid": "4420", 00:31:44.040 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:44.040 "prchk_reftag": false, 00:31:44.040 "prchk_guard": false, 00:31:44.040 "ctrlr_loss_timeout_sec": 0, 00:31:44.040 
"reconnect_delay_sec": 0, 00:31:44.040 "fast_io_fail_timeout_sec": 0, 00:31:44.040 "psk": "/tmp/tmp.ydBc68Nn4l", 00:31:44.040 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:44.040 "hdgst": false, 00:31:44.040 "ddgst": false 00:31:44.040 } 00:31:44.040 }, 00:31:44.040 { 00:31:44.040 "method": "bdev_nvme_set_hotplug", 00:31:44.040 "params": { 00:31:44.040 "period_us": 100000, 00:31:44.040 "enable": false 00:31:44.040 } 00:31:44.040 }, 00:31:44.040 { 00:31:44.040 "method": "bdev_wait_for_examine" 00:31:44.040 } 00:31:44.040 ] 00:31:44.040 }, 00:31:44.040 { 00:31:44.040 "subsystem": "nbd", 00:31:44.040 "config": [] 00:31:44.040 } 00:31:44.040 ] 00:31:44.040 }' 00:31:44.040 08:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:44.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:44.040 08:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:44.040 08:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:44.040 [2024-05-15 08:57:38.714960] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:31:44.040 [2024-05-15 08:57:38.715048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2319702 ] 00:31:44.040 EAL: No free 2048 kB hugepages reported on node 1 00:31:44.040 [2024-05-15 08:57:38.781756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.298 [2024-05-15 08:57:38.864418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:44.298 [2024-05-15 08:57:39.021500] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:44.298 [2024-05-15 08:57:39.021658] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:45.231 08:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:45.231 08:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:31:45.231 08:57:39 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:31:45.231 Running I/O for 10 seconds... 
00:31:55.194 00:31:55.194 Latency(us) 00:31:55.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:55.194 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:55.194 Verification LBA range: start 0x0 length 0x2000 00:31:55.194 TLSTESTn1 : 10.02 3385.47 13.22 0.00 0.00 37743.60 6310.87 39612.87 00:31:55.194 =================================================================================================================== 00:31:55.194 Total : 3385.47 13.22 0.00 0.00 37743.60 6310.87 39612.87 00:31:55.194 0 00:31:55.194 08:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:55.194 08:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2319702 00:31:55.194 08:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2319702 ']' 00:31:55.194 08:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2319702 00:31:55.194 08:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:31:55.194 08:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:55.194 08:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2319702 00:31:55.194 08:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:31:55.194 08:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:31:55.194 08:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2319702' 00:31:55.194 killing process with pid 2319702 00:31:55.194 08:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2319702 00:31:55.194 Received shutdown signal, test time was about 10.000000 seconds 00:31:55.194 00:31:55.194 Latency(us) 00:31:55.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:55.194 =================================================================================================================== 00:31:55.194 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:55.194 [2024-05-15 08:57:49.911269] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:31:55.194 08:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2319702 00:31:55.451 08:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2319553 00:31:55.451 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2319553 ']' 00:31:55.451 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2319553 00:31:55.451 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:31:55.451 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:55.451 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2319553 00:31:55.451 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:31:55.451 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:31:55.451 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2319553' 00:31:55.451 killing process with pid 2319553 00:31:55.451 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2319553 00:31:55.451 [2024-05-15 08:57:50.150670] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:55.451 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2319553 00:31:55.452 [2024-05-15 08:57:50.150729] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:55.710 08:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:31:55.710 08:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:55.710 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:31:55.710 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:55.710 08:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2321028 00:31:55.710 08:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:55.710 08:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2321028 00:31:55.710 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2321028 ']' 00:31:55.710 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.710 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:55.710 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.710 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:55.710 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:55.710 [2024-05-15 08:57:50.429442] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:31:55.710 [2024-05-15 08:57:50.429529] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.710 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.969 [2024-05-15 08:57:50.507105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.969 [2024-05-15 08:57:50.588950] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:55.969 [2024-05-15 08:57:50.589025] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:55.969 [2024-05-15 08:57:50.589047] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:55.969 [2024-05-15 08:57:50.589058] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:55.969 [2024-05-15 08:57:50.589068] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
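The app_setup_trace notices directly above describe how to reach the tracepoints enabled by -e 0xFFFF: with instance id -i 0 they live in /dev/shm/nvmf_trace.0. Both options the notices mention, sketched; the spdk_trace command is quoted from the notice itself, while its build/bin location in this workspace is an assumption:

  # snapshot events at runtime, as the notice suggests
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0

  # or keep the shared-memory trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.saved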
00:31:55.969 [2024-05-15 08:57:50.589100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.969 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:55.969 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:31:55.969 08:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:55.969 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:55.969 08:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:55.969 08:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:55.969 08:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.ydBc68Nn4l 00:31:55.969 08:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ydBc68Nn4l 00:31:55.969 08:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:56.227 [2024-05-15 08:57:50.945903] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.227 08:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:31:56.485 08:57:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:31:56.743 [2024-05-15 08:57:51.447228] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:56.743 [2024-05-15 08:57:51.447331] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:56.743 [2024-05-15 08:57:51.447570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.743 08:57:51 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:31:57.000 malloc0 00:31:57.000 08:57:51 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:57.256 08:57:51 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ydBc68Nn4l 00:31:57.572 [2024-05-15 08:57:52.188989] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:57.572 08:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2321242 00:31:57.572 08:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:57.572 08:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:31:57.572 08:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2321242 /var/tmp/bdevperf.sock 00:31:57.572 08:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2321242 ']' 00:31:57.572 08:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:57.572 08:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:57.572 08:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:57.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:57.572 08:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:57.572 08:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:57.572 [2024-05-15 08:57:52.253835] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:31:57.572 [2024-05-15 08:57:52.253919] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2321242 ] 00:31:57.572 EAL: No free 2048 kB hugepages reported on node 1 00:31:57.572 [2024-05-15 08:57:52.330960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.830 [2024-05-15 08:57:52.419870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:57.830 08:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:57.830 08:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:31:57.830 08:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ydBc68Nn4l 00:31:58.087 08:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:31:58.344 [2024-05-15 08:57:53.014937] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:58.344 nvme0n1 00:31:58.344 08:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:58.600 Running I/O for 1 seconds... 
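Note the difference from the earlier TLSTEST attach: there the PSK file was handed straight to bdev_nvme_attach_controller as --psk /tmp/tmp.ydBc68Nn4l, which tripped the spdk_nvme_ctrlr_opts.psk deprecation warning; here the key is first registered with keyring_file_add_key and then referenced by name, and no deprecation notice follows. Condensed from the two RPCs traced above:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  # register the PSK file in the keyring under the name key0
  $rpc -s $sock keyring_file_add_key key0 /tmp/tmp.ydBc68Nn4l

  # attach referencing the keyring entry instead of a raw PSK path
  $rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1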
00:31:59.534 00:31:59.534 Latency(us) 00:31:59.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.534 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:59.534 Verification LBA range: start 0x0 length 0x2000 00:31:59.534 nvme0n1 : 1.03 3148.54 12.30 0.00 0.00 40047.01 6505.05 38641.97 00:31:59.534 =================================================================================================================== 00:31:59.534 Total : 3148.54 12.30 0.00 0.00 40047.01 6505.05 38641.97 00:31:59.534 0 00:31:59.534 08:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2321242 00:31:59.534 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2321242 ']' 00:31:59.534 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2321242 00:31:59.534 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:31:59.534 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:59.534 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2321242 00:31:59.534 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:31:59.534 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:31:59.534 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2321242' 00:31:59.534 killing process with pid 2321242 00:31:59.534 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2321242 00:31:59.534 Received shutdown signal, test time was about 1.000000 seconds 00:31:59.534 00:31:59.534 Latency(us) 00:31:59.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.534 =================================================================================================================== 00:31:59.534 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:59.534 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2321242 00:31:59.791 08:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2321028 00:31:59.791 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2321028 ']' 00:31:59.791 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2321028 00:31:59.791 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:31:59.791 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:59.791 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2321028 00:31:59.791 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:31:59.791 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:31:59.791 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2321028' 00:31:59.791 killing process with pid 2321028 00:31:59.791 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2321028 00:31:59.791 [2024-05-15 08:57:54.531917] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:59.791 [2024-05-15 08:57:54.531974] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:59.791 08:57:54 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@971 -- # wait 2321028 00:32:00.049 08:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:32:00.049 08:57:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:00.049 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:32:00.049 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:00.049 08:57:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2321599 00:32:00.049 08:57:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:00.049 08:57:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2321599 00:32:00.049 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2321599 ']' 00:32:00.049 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.049 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:32:00.049 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.049 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:00.049 08:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:00.049 [2024-05-15 08:57:54.834576] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:32:00.049 [2024-05-15 08:57:54.834670] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.306 EAL: No free 2048 kB hugepages reported on node 1 00:32:00.306 [2024-05-15 08:57:54.908290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.306 [2024-05-15 08:57:54.990968] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.306 [2024-05-15 08:57:54.991046] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.306 [2024-05-15 08:57:54.991075] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.306 [2024-05-15 08:57:54.991087] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.306 [2024-05-15 08:57:54.991097] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
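Every teardown in this log (pids 2319702, 2319553, 2321242, 2321028) runs the same traced killprocess pattern: validate the pid, probe it with kill -0, look up the command name, then signal. A condensed reconstruction from those traces, not the verbatim autotest_common.sh helper (the real function also handles a sudo-wrapped process, a branch these runs never take):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1          # the '[' -z <pid> ']' guard in the trace
      kill -0 "$pid" || return 0         # probe: is there anything left to kill?
      local process_name=
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      # the trace compares $process_name against sudo; in these runs it is
      # always a reactor_N thread, so the plain kill below is the path taken
      echo "killing process with pid $pid"
      kill "$pid"
  }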
00:32:00.306 [2024-05-15 08:57:54.991125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.564 08:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:00.564 08:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:32:00.564 08:57:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:00.564 08:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:32:00.564 08:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:00.564 08:57:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.564 08:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:32:00.564 08:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:00.564 08:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:00.564 [2024-05-15 08:57:55.136583] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.564 malloc0 00:32:00.564 [2024-05-15 08:57:55.168101] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:32:00.564 [2024-05-15 08:57:55.168197] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:00.564 [2024-05-15 08:57:55.168478] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:00.564 08:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:00.564 08:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2321622 00:32:00.564 08:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:32:00.564 08:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2321622 /var/tmp/bdevperf.sock 00:32:00.564 08:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2321622 ']' 00:32:00.564 08:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:00.564 08:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:32:00.564 08:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:00.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:00.564 08:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:00.564 08:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:00.564 [2024-05-15 08:57:55.235543] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
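waitforlisten, traced just above for bdevperf pid 2321622, polls until the application answers on its UNIX-domain RPC socket before the test proceeds. A condensed sketch of that loop using the names visible in the trace (rpc_addr, max_retries); polling with rpc_get_methods as the liveness probe is an assumption, not something this log shows:

  waitforlisten() {
      local pid=$1
      local rpc_addr=${2:-/var/tmp/bdevperf.sock}
      local max_retries=100 i
      local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1    # app died while we waited
          # assumed probe: any RPC that answers proves the socket is live
          $rpc -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && return 0
          sleep 0.1
      done
      return 1
  }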
00:32:00.564 [2024-05-15 08:57:55.235618] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2321622 ] 00:32:00.564 EAL: No free 2048 kB hugepages reported on node 1 00:32:00.564 [2024-05-15 08:57:55.306620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.821 [2024-05-15 08:57:55.392847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.821 08:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:00.821 08:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:32:00.821 08:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ydBc68Nn4l 00:32:01.078 08:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:32:01.336 [2024-05-15 08:57:55.990172] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:01.336 nvme0n1 00:32:01.336 08:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:01.593 Running I/O for 1 seconds... 00:32:02.526 00:32:02.526 Latency(us) 00:32:02.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:02.526 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:02.526 Verification LBA range: start 0x0 length 0x2000 00:32:02.526 nvme0n1 : 1.02 3266.71 12.76 0.00 0.00 38737.06 10679.94 40389.59 00:32:02.526 =================================================================================================================== 00:32:02.526 Total : 3266.71 12.76 0.00 0.00 38737.06 10679.94 40389.59 00:32:02.526 0 00:32:02.526 08:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:32:02.526 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:02.526 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:02.526 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:02.526 08:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:32:02.526 "subsystems": [ 00:32:02.526 { 00:32:02.526 "subsystem": "keyring", 00:32:02.526 "config": [ 00:32:02.526 { 00:32:02.526 "method": "keyring_file_add_key", 00:32:02.526 "params": { 00:32:02.526 "name": "key0", 00:32:02.526 "path": "/tmp/tmp.ydBc68Nn4l" 00:32:02.526 } 00:32:02.526 } 00:32:02.526 ] 00:32:02.526 }, 00:32:02.526 { 00:32:02.526 "subsystem": "iobuf", 00:32:02.526 "config": [ 00:32:02.526 { 00:32:02.526 "method": "iobuf_set_options", 00:32:02.526 "params": { 00:32:02.526 "small_pool_count": 8192, 00:32:02.526 "large_pool_count": 1024, 00:32:02.526 "small_bufsize": 8192, 00:32:02.526 "large_bufsize": 135168 00:32:02.526 } 00:32:02.526 } 00:32:02.526 ] 00:32:02.526 }, 00:32:02.526 { 00:32:02.526 "subsystem": "sock", 00:32:02.526 "config": [ 00:32:02.526 { 00:32:02.526 "method": "sock_impl_set_options", 00:32:02.526 "params": { 00:32:02.526 "impl_name": "posix", 00:32:02.526 "recv_buf_size": 2097152, 
00:32:02.526 "send_buf_size": 2097152, 00:32:02.526 "enable_recv_pipe": true, 00:32:02.526 "enable_quickack": false, 00:32:02.526 "enable_placement_id": 0, 00:32:02.526 "enable_zerocopy_send_server": true, 00:32:02.526 "enable_zerocopy_send_client": false, 00:32:02.526 "zerocopy_threshold": 0, 00:32:02.526 "tls_version": 0, 00:32:02.526 "enable_ktls": false 00:32:02.526 } 00:32:02.526 }, 00:32:02.526 { 00:32:02.527 "method": "sock_impl_set_options", 00:32:02.527 "params": { 00:32:02.527 "impl_name": "ssl", 00:32:02.527 "recv_buf_size": 4096, 00:32:02.527 "send_buf_size": 4096, 00:32:02.527 "enable_recv_pipe": true, 00:32:02.527 "enable_quickack": false, 00:32:02.527 "enable_placement_id": 0, 00:32:02.527 "enable_zerocopy_send_server": true, 00:32:02.527 "enable_zerocopy_send_client": false, 00:32:02.527 "zerocopy_threshold": 0, 00:32:02.527 "tls_version": 0, 00:32:02.527 "enable_ktls": false 00:32:02.527 } 00:32:02.527 } 00:32:02.527 ] 00:32:02.527 }, 00:32:02.527 { 00:32:02.527 "subsystem": "vmd", 00:32:02.527 "config": [] 00:32:02.527 }, 00:32:02.527 { 00:32:02.527 "subsystem": "accel", 00:32:02.527 "config": [ 00:32:02.527 { 00:32:02.527 "method": "accel_set_options", 00:32:02.527 "params": { 00:32:02.527 "small_cache_size": 128, 00:32:02.527 "large_cache_size": 16, 00:32:02.527 "task_count": 2048, 00:32:02.527 "sequence_count": 2048, 00:32:02.527 "buf_count": 2048 00:32:02.527 } 00:32:02.527 } 00:32:02.527 ] 00:32:02.527 }, 00:32:02.527 { 00:32:02.527 "subsystem": "bdev", 00:32:02.527 "config": [ 00:32:02.527 { 00:32:02.527 "method": "bdev_set_options", 00:32:02.527 "params": { 00:32:02.527 "bdev_io_pool_size": 65535, 00:32:02.527 "bdev_io_cache_size": 256, 00:32:02.527 "bdev_auto_examine": true, 00:32:02.527 "iobuf_small_cache_size": 128, 00:32:02.527 "iobuf_large_cache_size": 16 00:32:02.527 } 00:32:02.527 }, 00:32:02.527 { 00:32:02.527 "method": "bdev_raid_set_options", 00:32:02.527 "params": { 00:32:02.527 "process_window_size_kb": 1024 00:32:02.527 } 00:32:02.527 }, 00:32:02.527 { 00:32:02.527 "method": "bdev_iscsi_set_options", 00:32:02.527 "params": { 00:32:02.527 "timeout_sec": 30 00:32:02.527 } 00:32:02.527 }, 00:32:02.527 { 00:32:02.527 "method": "bdev_nvme_set_options", 00:32:02.527 "params": { 00:32:02.527 "action_on_timeout": "none", 00:32:02.527 "timeout_us": 0, 00:32:02.527 "timeout_admin_us": 0, 00:32:02.527 "keep_alive_timeout_ms": 10000, 00:32:02.527 "arbitration_burst": 0, 00:32:02.527 "low_priority_weight": 0, 00:32:02.527 "medium_priority_weight": 0, 00:32:02.527 "high_priority_weight": 0, 00:32:02.527 "nvme_adminq_poll_period_us": 10000, 00:32:02.527 "nvme_ioq_poll_period_us": 0, 00:32:02.527 "io_queue_requests": 0, 00:32:02.527 "delay_cmd_submit": true, 00:32:02.527 "transport_retry_count": 4, 00:32:02.527 "bdev_retry_count": 3, 00:32:02.527 "transport_ack_timeout": 0, 00:32:02.527 "ctrlr_loss_timeout_sec": 0, 00:32:02.527 "reconnect_delay_sec": 0, 00:32:02.527 "fast_io_fail_timeout_sec": 0, 00:32:02.527 "disable_auto_failback": false, 00:32:02.527 "generate_uuids": false, 00:32:02.527 "transport_tos": 0, 00:32:02.527 "nvme_error_stat": false, 00:32:02.527 "rdma_srq_size": 0, 00:32:02.527 "io_path_stat": false, 00:32:02.527 "allow_accel_sequence": false, 00:32:02.527 "rdma_max_cq_size": 0, 00:32:02.527 "rdma_cm_event_timeout_ms": 0, 00:32:02.527 "dhchap_digests": [ 00:32:02.527 "sha256", 00:32:02.527 "sha384", 00:32:02.527 "sha512" 00:32:02.527 ], 00:32:02.527 "dhchap_dhgroups": [ 00:32:02.527 "null", 00:32:02.527 "ffdhe2048", 00:32:02.527 "ffdhe3072", 
00:32:02.527 "ffdhe4096", 00:32:02.527 "ffdhe6144", 00:32:02.527 "ffdhe8192" 00:32:02.527 ] 00:32:02.527 } 00:32:02.527 }, 00:32:02.527 { 00:32:02.527 "method": "bdev_nvme_set_hotplug", 00:32:02.527 "params": { 00:32:02.527 "period_us": 100000, 00:32:02.527 "enable": false 00:32:02.527 } 00:32:02.527 }, 00:32:02.527 { 00:32:02.527 "method": "bdev_malloc_create", 00:32:02.527 "params": { 00:32:02.527 "name": "malloc0", 00:32:02.527 "num_blocks": 8192, 00:32:02.527 "block_size": 4096, 00:32:02.527 "physical_block_size": 4096, 00:32:02.527 "uuid": "21b9e204-8077-4c02-8371-eb7355b76d77", 00:32:02.527 "optimal_io_boundary": 0 00:32:02.527 } 00:32:02.527 }, 00:32:02.527 { 00:32:02.527 "method": "bdev_wait_for_examine" 00:32:02.527 } 00:32:02.527 ] 00:32:02.527 }, 00:32:02.527 { 00:32:02.527 "subsystem": "nbd", 00:32:02.527 "config": [] 00:32:02.527 }, 00:32:02.527 { 00:32:02.527 "subsystem": "scheduler", 00:32:02.527 "config": [ 00:32:02.527 { 00:32:02.527 "method": "framework_set_scheduler", 00:32:02.527 "params": { 00:32:02.527 "name": "static" 00:32:02.527 } 00:32:02.527 } 00:32:02.527 ] 00:32:02.527 }, 00:32:02.527 { 00:32:02.527 "subsystem": "nvmf", 00:32:02.527 "config": [ 00:32:02.527 { 00:32:02.527 "method": "nvmf_set_config", 00:32:02.527 "params": { 00:32:02.527 "discovery_filter": "match_any", 00:32:02.527 "admin_cmd_passthru": { 00:32:02.527 "identify_ctrlr": false 00:32:02.527 } 00:32:02.527 } 00:32:02.527 }, 00:32:02.527 { 00:32:02.527 "method": "nvmf_set_max_subsystems", 00:32:02.527 "params": { 00:32:02.527 "max_subsystems": 1024 00:32:02.527 } 00:32:02.527 }, 00:32:02.527 { 00:32:02.527 "method": "nvmf_set_crdt", 00:32:02.527 "params": { 00:32:02.527 "crdt1": 0, 00:32:02.527 "crdt2": 0, 00:32:02.527 "crdt3": 0 00:32:02.527 } 00:32:02.527 }, 00:32:02.527 { 00:32:02.527 "method": "nvmf_create_transport", 00:32:02.527 "params": { 00:32:02.527 "trtype": "TCP", 00:32:02.527 "max_queue_depth": 128, 00:32:02.527 "max_io_qpairs_per_ctrlr": 127, 00:32:02.527 "in_capsule_data_size": 4096, 00:32:02.527 "max_io_size": 131072, 00:32:02.527 "io_unit_size": 131072, 00:32:02.527 "max_aq_depth": 128, 00:32:02.527 "num_shared_buffers": 511, 00:32:02.527 "buf_cache_size": 4294967295, 00:32:02.527 "dif_insert_or_strip": false, 00:32:02.527 "zcopy": false, 00:32:02.527 "c2h_success": false, 00:32:02.527 "sock_priority": 0, 00:32:02.527 "abort_timeout_sec": 1, 00:32:02.527 "ack_timeout": 0, 00:32:02.527 "data_wr_pool_size": 0 00:32:02.527 } 00:32:02.527 }, 00:32:02.527 { 00:32:02.527 "method": "nvmf_create_subsystem", 00:32:02.527 "params": { 00:32:02.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:02.527 "allow_any_host": false, 00:32:02.527 "serial_number": "00000000000000000000", 00:32:02.527 "model_number": "SPDK bdev Controller", 00:32:02.527 "max_namespaces": 32, 00:32:02.527 "min_cntlid": 1, 00:32:02.527 "max_cntlid": 65519, 00:32:02.527 "ana_reporting": false 00:32:02.527 } 00:32:02.527 }, 00:32:02.527 { 00:32:02.527 "method": "nvmf_subsystem_add_host", 00:32:02.527 "params": { 00:32:02.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:02.527 "host": "nqn.2016-06.io.spdk:host1", 00:32:02.527 "psk": "key0" 00:32:02.527 } 00:32:02.527 }, 00:32:02.527 { 00:32:02.527 "method": "nvmf_subsystem_add_ns", 00:32:02.527 "params": { 00:32:02.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:02.527 "namespace": { 00:32:02.527 "nsid": 1, 00:32:02.527 "bdev_name": "malloc0", 00:32:02.527 "nguid": "21B9E20480774C028371EB7355B76D77", 00:32:02.527 "uuid": "21b9e204-8077-4c02-8371-eb7355b76d77", 00:32:02.527 
"no_auto_visible": false 00:32:02.527 } 00:32:02.527 } 00:32:02.527 }, 00:32:02.527 { 00:32:02.527 "method": "nvmf_subsystem_add_listener", 00:32:02.527 "params": { 00:32:02.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:02.527 "listen_address": { 00:32:02.527 "trtype": "TCP", 00:32:02.527 "adrfam": "IPv4", 00:32:02.527 "traddr": "10.0.0.2", 00:32:02.527 "trsvcid": "4420" 00:32:02.527 }, 00:32:02.527 "secure_channel": true 00:32:02.527 } 00:32:02.527 } 00:32:02.527 ] 00:32:02.527 } 00:32:02.527 ] 00:32:02.527 }' 00:32:02.527 08:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:32:03.094 08:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:32:03.094 "subsystems": [ 00:32:03.094 { 00:32:03.094 "subsystem": "keyring", 00:32:03.094 "config": [ 00:32:03.094 { 00:32:03.094 "method": "keyring_file_add_key", 00:32:03.094 "params": { 00:32:03.094 "name": "key0", 00:32:03.094 "path": "/tmp/tmp.ydBc68Nn4l" 00:32:03.094 } 00:32:03.094 } 00:32:03.094 ] 00:32:03.094 }, 00:32:03.094 { 00:32:03.094 "subsystem": "iobuf", 00:32:03.094 "config": [ 00:32:03.094 { 00:32:03.094 "method": "iobuf_set_options", 00:32:03.094 "params": { 00:32:03.094 "small_pool_count": 8192, 00:32:03.094 "large_pool_count": 1024, 00:32:03.094 "small_bufsize": 8192, 00:32:03.094 "large_bufsize": 135168 00:32:03.094 } 00:32:03.094 } 00:32:03.094 ] 00:32:03.094 }, 00:32:03.094 { 00:32:03.094 "subsystem": "sock", 00:32:03.094 "config": [ 00:32:03.094 { 00:32:03.094 "method": "sock_impl_set_options", 00:32:03.094 "params": { 00:32:03.094 "impl_name": "posix", 00:32:03.094 "recv_buf_size": 2097152, 00:32:03.094 "send_buf_size": 2097152, 00:32:03.094 "enable_recv_pipe": true, 00:32:03.094 "enable_quickack": false, 00:32:03.094 "enable_placement_id": 0, 00:32:03.094 "enable_zerocopy_send_server": true, 00:32:03.094 "enable_zerocopy_send_client": false, 00:32:03.094 "zerocopy_threshold": 0, 00:32:03.094 "tls_version": 0, 00:32:03.094 "enable_ktls": false 00:32:03.094 } 00:32:03.094 }, 00:32:03.094 { 00:32:03.094 "method": "sock_impl_set_options", 00:32:03.094 "params": { 00:32:03.094 "impl_name": "ssl", 00:32:03.094 "recv_buf_size": 4096, 00:32:03.094 "send_buf_size": 4096, 00:32:03.094 "enable_recv_pipe": true, 00:32:03.094 "enable_quickack": false, 00:32:03.094 "enable_placement_id": 0, 00:32:03.094 "enable_zerocopy_send_server": true, 00:32:03.094 "enable_zerocopy_send_client": false, 00:32:03.094 "zerocopy_threshold": 0, 00:32:03.094 "tls_version": 0, 00:32:03.094 "enable_ktls": false 00:32:03.094 } 00:32:03.094 } 00:32:03.094 ] 00:32:03.094 }, 00:32:03.094 { 00:32:03.094 "subsystem": "vmd", 00:32:03.094 "config": [] 00:32:03.094 }, 00:32:03.094 { 00:32:03.094 "subsystem": "accel", 00:32:03.094 "config": [ 00:32:03.094 { 00:32:03.094 "method": "accel_set_options", 00:32:03.094 "params": { 00:32:03.094 "small_cache_size": 128, 00:32:03.094 "large_cache_size": 16, 00:32:03.094 "task_count": 2048, 00:32:03.094 "sequence_count": 2048, 00:32:03.094 "buf_count": 2048 00:32:03.094 } 00:32:03.094 } 00:32:03.094 ] 00:32:03.094 }, 00:32:03.094 { 00:32:03.094 "subsystem": "bdev", 00:32:03.094 "config": [ 00:32:03.094 { 00:32:03.094 "method": "bdev_set_options", 00:32:03.094 "params": { 00:32:03.094 "bdev_io_pool_size": 65535, 00:32:03.094 "bdev_io_cache_size": 256, 00:32:03.094 "bdev_auto_examine": true, 00:32:03.094 "iobuf_small_cache_size": 128, 00:32:03.094 "iobuf_large_cache_size": 16 00:32:03.094 } 00:32:03.094 }, 
00:32:03.094 { 00:32:03.094 "method": "bdev_raid_set_options", 00:32:03.094 "params": { 00:32:03.094 "process_window_size_kb": 1024 00:32:03.094 } 00:32:03.094 }, 00:32:03.094 { 00:32:03.094 "method": "bdev_iscsi_set_options", 00:32:03.094 "params": { 00:32:03.094 "timeout_sec": 30 00:32:03.094 } 00:32:03.094 }, 00:32:03.094 { 00:32:03.094 "method": "bdev_nvme_set_options", 00:32:03.094 "params": { 00:32:03.094 "action_on_timeout": "none", 00:32:03.094 "timeout_us": 0, 00:32:03.094 "timeout_admin_us": 0, 00:32:03.094 "keep_alive_timeout_ms": 10000, 00:32:03.094 "arbitration_burst": 0, 00:32:03.094 "low_priority_weight": 0, 00:32:03.094 "medium_priority_weight": 0, 00:32:03.094 "high_priority_weight": 0, 00:32:03.094 "nvme_adminq_poll_period_us": 10000, 00:32:03.094 "nvme_ioq_poll_period_us": 0, 00:32:03.094 "io_queue_requests": 512, 00:32:03.094 "delay_cmd_submit": true, 00:32:03.094 "transport_retry_count": 4, 00:32:03.094 "bdev_retry_count": 3, 00:32:03.094 "transport_ack_timeout": 0, 00:32:03.094 "ctrlr_loss_timeout_sec": 0, 00:32:03.094 "reconnect_delay_sec": 0, 00:32:03.094 "fast_io_fail_timeout_sec": 0, 00:32:03.094 "disable_auto_failback": false, 00:32:03.094 "generate_uuids": false, 00:32:03.094 "transport_tos": 0, 00:32:03.094 "nvme_error_stat": false, 00:32:03.094 "rdma_srq_size": 0, 00:32:03.094 "io_path_stat": false, 00:32:03.094 "allow_accel_sequence": false, 00:32:03.094 "rdma_max_cq_size": 0, 00:32:03.094 "rdma_cm_event_timeout_ms": 0, 00:32:03.094 "dhchap_digests": [ 00:32:03.094 "sha256", 00:32:03.094 "sha384", 00:32:03.094 "sha512" 00:32:03.094 ], 00:32:03.094 "dhchap_dhgroups": [ 00:32:03.094 "null", 00:32:03.094 "ffdhe2048", 00:32:03.094 "ffdhe3072", 00:32:03.094 "ffdhe4096", 00:32:03.094 "ffdhe6144", 00:32:03.094 "ffdhe8192" 00:32:03.094 ] 00:32:03.094 } 00:32:03.094 }, 00:32:03.094 { 00:32:03.094 "method": "bdev_nvme_attach_controller", 00:32:03.094 "params": { 00:32:03.094 "name": "nvme0", 00:32:03.094 "trtype": "TCP", 00:32:03.094 "adrfam": "IPv4", 00:32:03.094 "traddr": "10.0.0.2", 00:32:03.094 "trsvcid": "4420", 00:32:03.094 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:03.094 "prchk_reftag": false, 00:32:03.094 "prchk_guard": false, 00:32:03.094 "ctrlr_loss_timeout_sec": 0, 00:32:03.094 "reconnect_delay_sec": 0, 00:32:03.094 "fast_io_fail_timeout_sec": 0, 00:32:03.094 "psk": "key0", 00:32:03.094 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:03.094 "hdgst": false, 00:32:03.094 "ddgst": false 00:32:03.094 } 00:32:03.094 }, 00:32:03.094 { 00:32:03.094 "method": "bdev_nvme_set_hotplug", 00:32:03.094 "params": { 00:32:03.094 "period_us": 100000, 00:32:03.094 "enable": false 00:32:03.094 } 00:32:03.094 }, 00:32:03.094 { 00:32:03.094 "method": "bdev_enable_histogram", 00:32:03.094 "params": { 00:32:03.094 "name": "nvme0n1", 00:32:03.094 "enable": true 00:32:03.094 } 00:32:03.094 }, 00:32:03.094 { 00:32:03.094 "method": "bdev_wait_for_examine" 00:32:03.094 } 00:32:03.094 ] 00:32:03.094 }, 00:32:03.094 { 00:32:03.094 "subsystem": "nbd", 00:32:03.094 "config": [] 00:32:03.094 } 00:32:03.094 ] 00:32:03.095 }' 00:32:03.095 08:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2321622 00:32:03.095 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2321622 ']' 00:32:03.095 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2321622 00:32:03.095 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:32:03.095 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:32:03.095 
08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2321622 00:32:03.095 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:32:03.095 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:32:03.095 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2321622' 00:32:03.095 killing process with pid 2321622 00:32:03.095 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2321622 00:32:03.095 Received shutdown signal, test time was about 1.000000 seconds 00:32:03.095 00:32:03.095 Latency(us) 00:32:03.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:03.095 =================================================================================================================== 00:32:03.095 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:03.095 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2321622 00:32:03.353 08:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2321599 00:32:03.353 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2321599 ']' 00:32:03.353 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2321599 00:32:03.353 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:32:03.353 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:32:03.353 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2321599 00:32:03.353 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:32:03.353 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:32:03.353 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2321599' 00:32:03.353 killing process with pid 2321599 00:32:03.353 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2321599 00:32:03.353 [2024-05-15 08:57:57.923039] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:32:03.353 08:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2321599 00:32:03.613 08:57:58 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:32:03.613 08:57:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:03.613 08:57:58 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:32:03.613 "subsystems": [ 00:32:03.613 { 00:32:03.613 "subsystem": "keyring", 00:32:03.613 "config": [ 00:32:03.613 { 00:32:03.613 "method": "keyring_file_add_key", 00:32:03.613 "params": { 00:32:03.613 "name": "key0", 00:32:03.613 "path": "/tmp/tmp.ydBc68Nn4l" 00:32:03.613 } 00:32:03.613 } 00:32:03.613 ] 00:32:03.613 }, 00:32:03.613 { 00:32:03.613 "subsystem": "iobuf", 00:32:03.613 "config": [ 00:32:03.613 { 00:32:03.613 "method": "iobuf_set_options", 00:32:03.613 "params": { 00:32:03.613 "small_pool_count": 8192, 00:32:03.613 "large_pool_count": 1024, 00:32:03.613 "small_bufsize": 8192, 00:32:03.613 "large_bufsize": 135168 00:32:03.613 } 00:32:03.613 } 00:32:03.613 ] 00:32:03.613 }, 00:32:03.613 { 00:32:03.613 "subsystem": "sock", 00:32:03.613 "config": [ 00:32:03.613 { 00:32:03.613 "method": "sock_impl_set_options", 00:32:03.613 "params": { 00:32:03.613 "impl_name": "posix", 00:32:03.613 
"recv_buf_size": 2097152, 00:32:03.613 "send_buf_size": 2097152, 00:32:03.613 "enable_recv_pipe": true, 00:32:03.613 "enable_quickack": false, 00:32:03.613 "enable_placement_id": 0, 00:32:03.613 "enable_zerocopy_send_server": true, 00:32:03.613 "enable_zerocopy_send_client": false, 00:32:03.613 "zerocopy_threshold": 0, 00:32:03.613 "tls_version": 0, 00:32:03.613 "enable_ktls": false 00:32:03.613 } 00:32:03.613 }, 00:32:03.613 { 00:32:03.613 "method": "sock_impl_set_options", 00:32:03.613 "params": { 00:32:03.613 "impl_name": "ssl", 00:32:03.613 "recv_buf_size": 4096, 00:32:03.613 "send_buf_size": 4096, 00:32:03.613 "enable_recv_pipe": true, 00:32:03.613 "enable_quickack": false, 00:32:03.613 "enable_placement_id": 0, 00:32:03.613 "enable_zerocopy_send_server": true, 00:32:03.613 "enable_zerocopy_send_client": false, 00:32:03.613 "zerocopy_threshold": 0, 00:32:03.613 "tls_version": 0, 00:32:03.613 "enable_ktls": false 00:32:03.613 } 00:32:03.613 } 00:32:03.613 ] 00:32:03.613 }, 00:32:03.613 { 00:32:03.613 "subsystem": "vmd", 00:32:03.613 "config": [] 00:32:03.613 }, 00:32:03.613 { 00:32:03.613 "subsystem": "accel", 00:32:03.613 "config": [ 00:32:03.613 { 00:32:03.613 "method": "accel_set_options", 00:32:03.613 "params": { 00:32:03.613 "small_cache_size": 128, 00:32:03.613 "large_cache_size": 16, 00:32:03.613 "task_count": 2048, 00:32:03.613 "sequence_count": 2048, 00:32:03.613 "buf_count": 2048 00:32:03.613 } 00:32:03.613 } 00:32:03.613 ] 00:32:03.613 }, 00:32:03.613 { 00:32:03.613 "subsystem": "bdev", 00:32:03.613 "config": [ 00:32:03.613 { 00:32:03.613 "method": "bdev_set_options", 00:32:03.613 "params": { 00:32:03.613 "bdev_io_pool_size": 65535, 00:32:03.613 "bdev_io_cache_size": 256, 00:32:03.613 "bdev_auto_examine": true, 00:32:03.613 "iobuf_small_cache_size": 128, 00:32:03.613 "iobuf_large_cache_size": 16 00:32:03.613 } 00:32:03.613 }, 00:32:03.613 { 00:32:03.613 "method": "bdev_raid_set_options", 00:32:03.613 "params": { 00:32:03.613 "process_window_size_kb": 1024 00:32:03.613 } 00:32:03.613 }, 00:32:03.613 { 00:32:03.613 "method": "bdev_iscsi_set_options", 00:32:03.613 "params": { 00:32:03.613 "timeout_sec": 30 00:32:03.613 } 00:32:03.613 }, 00:32:03.613 { 00:32:03.613 "method": "bdev_nvme_set_options", 00:32:03.613 "params": { 00:32:03.613 "action_on_timeout": "none", 00:32:03.613 "timeout_us": 0, 00:32:03.613 "timeout_admin_us": 0, 00:32:03.613 "keep_alive_timeout_ms": 10000, 00:32:03.613 "arbitration_burst": 0, 00:32:03.613 "low_priority_weight": 0, 00:32:03.613 "medium_priority_weight": 0, 00:32:03.613 "high_priority_weight": 0, 00:32:03.613 "nvme_adminq_poll_period_us": 10000, 00:32:03.613 "nvme_ioq_poll_period_us": 0, 00:32:03.613 "io_queue_requests": 0, 00:32:03.613 "delay_cmd_submit": true, 00:32:03.613 "transport_retry_count": 4, 00:32:03.613 "bdev_retry_count": 3, 00:32:03.613 "transport_ack_timeout": 0, 00:32:03.613 "ctrlr_loss_timeout_sec": 0, 00:32:03.613 "reconnect_delay_sec": 0, 00:32:03.613 "fast_io_fail_timeout_sec": 0, 00:32:03.613 "disable_auto_failback": false, 00:32:03.613 "generate_uuids": false, 00:32:03.613 "transport_tos": 0, 00:32:03.613 "nvme_error_stat": false, 00:32:03.613 "rdma_srq_size": 0, 00:32:03.613 "io_path_stat": false, 00:32:03.613 "allow_accel_sequence": false, 00:32:03.613 "rdma_max_cq_size": 0, 00:32:03.613 "rdma_cm_event_timeout_ms": 0, 00:32:03.613 "dhchap_digests": [ 00:32:03.613 "sha256", 00:32:03.613 "sha384", 00:32:03.613 "sha512" 00:32:03.613 ], 00:32:03.613 "dhchap_dhgroups": [ 00:32:03.613 "null", 00:32:03.613 "ffdhe2048", 
00:32:03.613 "ffdhe3072", 00:32:03.613 "ffdhe4096", 00:32:03.613 "ffdhe6144", 00:32:03.613 "ffdhe8192" 00:32:03.613 ] 00:32:03.613 } 00:32:03.613 }, 00:32:03.613 { 00:32:03.613 "method": "bdev_nvme_set_hotplug", 00:32:03.613 "params": { 00:32:03.613 "period_us": 100000, 00:32:03.613 "enable": false 00:32:03.613 } 00:32:03.613 }, 00:32:03.613 { 00:32:03.613 "method": "bdev_malloc_create", 00:32:03.613 "params": { 00:32:03.613 "name": "malloc0", 00:32:03.613 "num_blocks": 8192, 00:32:03.613 "block_size": 4096, 00:32:03.613 "physical_block_size": 4096, 00:32:03.613 "uuid": "21b9e204-8077-4c02-8371-eb7355b76d77", 00:32:03.613 "optimal_io_boundary": 0 00:32:03.613 } 00:32:03.613 }, 00:32:03.613 { 00:32:03.613 "method": "bdev_wait_for_examine" 00:32:03.613 } 00:32:03.613 ] 00:32:03.613 }, 00:32:03.613 { 00:32:03.613 "subsystem": "nbd", 00:32:03.613 "config": [] 00:32:03.613 }, 00:32:03.613 { 00:32:03.613 "subsystem": "scheduler", 00:32:03.613 "config": [ 00:32:03.613 { 00:32:03.613 "method": "framework_set_scheduler", 00:32:03.613 "params": { 00:32:03.613 "name": "static" 00:32:03.613 } 00:32:03.613 } 00:32:03.613 ] 00:32:03.613 }, 00:32:03.613 { 00:32:03.613 "subsystem": "nvmf", 00:32:03.613 "config": [ 00:32:03.613 { 00:32:03.613 "method": "nvmf_set_config", 00:32:03.613 "params": { 00:32:03.613 "discovery_filter": "match_any", 00:32:03.613 "admin_cmd_passthru": { 00:32:03.613 "identify_ctrlr": false 00:32:03.613 } 00:32:03.613 } 00:32:03.613 }, 00:32:03.613 { 00:32:03.613 "method": "nvmf_set_max_subsystems", 00:32:03.613 "params": { 00:32:03.613 "max_subsystems": 1024 00:32:03.613 } 00:32:03.613 }, 00:32:03.613 { 00:32:03.613 "method": "nvmf_set_crdt", 00:32:03.613 "params": { 00:32:03.613 "crdt1": 0, 00:32:03.613 "crdt2": 0, 00:32:03.613 "crdt3": 0 00:32:03.613 } 00:32:03.613 }, 00:32:03.613 { 00:32:03.613 "method": "nvmf_create_transport", 00:32:03.613 "params": { 00:32:03.613 "trtype": "TCP", 00:32:03.613 "max_queue_depth": 128, 00:32:03.613 "max_io_qpairs_per_ctrlr": 127, 00:32:03.613 "in_capsule_data_size": 4096, 00:32:03.613 "max_io_size": 131072, 00:32:03.613 "io_unit_size": 131072, 00:32:03.613 "max_aq_depth": 128, 00:32:03.613 "num_shared_buffers": 511, 00:32:03.613 "buf_cache_size": 4294967295, 00:32:03.613 "dif_insert_or_strip": false, 00:32:03.613 "zcopy": false, 00:32:03.613 "c2h_success": false, 00:32:03.613 "sock_priority": 0, 00:32:03.613 "abort_timeout_sec": 1, 00:32:03.613 "ack_timeout": 0, 00:32:03.613 "data_wr_pool_size": 0 00:32:03.613 } 00:32:03.613 }, 00:32:03.613 { 00:32:03.614 "method": "nvmf_create_subsystem", 00:32:03.614 "params": { 00:32:03.614 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:03.614 "allow_any_host": false, 00:32:03.614 "serial_number": "00000000000000000000", 00:32:03.614 "model_number": "SPDK bdev Controller", 00:32:03.614 "max_namespaces": 32, 00:32:03.614 "min_cntlid": 1, 00:32:03.614 "max_cntlid": 65519, 00:32:03.614 "ana_reporting": false 00:32:03.614 } 00:32:03.614 }, 00:32:03.614 { 00:32:03.614 "method": "nvmf_subsystem_add_host", 00:32:03.614 "params": { 00:32:03.614 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:03.614 "host": "nqn.2016-06.io.spdk:host1", 00:32:03.614 "psk": "key0" 00:32:03.614 } 00:32:03.614 }, 00:32:03.614 { 00:32:03.614 "method": "nvmf_subsystem_add_ns", 00:32:03.614 "params": { 00:32:03.614 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:03.614 "namespace": { 00:32:03.614 "nsid": 1, 00:32:03.614 "bdev_name": "malloc0", 00:32:03.614 "nguid": "21B9E20480774C028371EB7355B76D77", 00:32:03.614 "uuid": 
"21b9e204-8077-4c02-8371-eb7355b76d77", 00:32:03.614 "no_auto_visible": false 00:32:03.614 } 00:32:03.614 } 00:32:03.614 }, 00:32:03.614 { 00:32:03.614 "method": "nvmf_subsystem_add_listener", 00:32:03.614 "params": { 00:32:03.614 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:03.614 "listen_address": { 00:32:03.614 "trtype": "TCP", 00:32:03.614 "adrfam": "IPv4", 00:32:03.614 "traddr": "10.0.0.2", 00:32:03.614 "trsvcid": "4420" 00:32:03.614 }, 00:32:03.614 "secure_channel": true 00:32:03.614 } 00:32:03.614 } 00:32:03.614 ] 00:32:03.614 } 00:32:03.614 ] 00:32:03.614 }' 00:32:03.614 08:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:32:03.614 08:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:03.614 08:57:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2322028 00:32:03.614 08:57:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:32:03.614 08:57:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2322028 00:32:03.614 08:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2322028 ']' 00:32:03.614 08:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:03.614 08:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:32:03.614 08:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:03.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:03.614 08:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:03.614 08:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:03.614 [2024-05-15 08:57:58.203237] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:32:03.614 [2024-05-15 08:57:58.203322] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:03.614 EAL: No free 2048 kB hugepages reported on node 1 00:32:03.614 [2024-05-15 08:57:58.280933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.614 [2024-05-15 08:57:58.364832] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:03.614 [2024-05-15 08:57:58.364895] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:03.614 [2024-05-15 08:57:58.364912] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:03.614 [2024-05-15 08:57:58.364926] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:03.614 [2024-05-15 08:57:58.364939] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:03.614 [2024-05-15 08:57:58.365037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:03.872 [2024-05-15 08:57:58.601850] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:03.872 [2024-05-15 08:57:58.633820] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:32:03.872 [2024-05-15 08:57:58.633895] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:03.872 [2024-05-15 08:57:58.644430] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:04.437 08:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:04.437 08:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:32:04.437 08:57:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:04.437 08:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:32:04.437 08:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:04.437 08:57:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:04.437 08:57:59 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2322180 00:32:04.437 08:57:59 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2322180 /var/tmp/bdevperf.sock 00:32:04.438 08:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2322180 ']' 00:32:04.438 08:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:04.438 08:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:32:04.438 08:57:59 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:32:04.438 08:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:04.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
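This second bdevperf instance is started with -z, so it idles until driven over /var/tmp/bdevperf.sock; its key and controller come from the JSON passed on -c /dev/fd/63. The first instance earlier in this run was configured with the equivalent explicit RPCs, condensed here as a sketch (NQNs, address and key path as used above):

    rpc=./scripts/rpc.py
    # Register the PSK interchange file, then attach the controller over TLS.
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ydBc68Nn4l
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # perform_tests runs the workload given on bdevperf's command line (-q 128 -o 4k -w verify -t 1).
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests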
00:32:04.438 08:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:04.438 08:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:04.438 08:57:59 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:32:04.438 "subsystems": [ 00:32:04.438 { 00:32:04.438 "subsystem": "keyring", 00:32:04.438 "config": [ 00:32:04.438 { 00:32:04.438 "method": "keyring_file_add_key", 00:32:04.438 "params": { 00:32:04.438 "name": "key0", 00:32:04.438 "path": "/tmp/tmp.ydBc68Nn4l" 00:32:04.438 } 00:32:04.438 } 00:32:04.438 ] 00:32:04.438 }, 00:32:04.438 { 00:32:04.438 "subsystem": "iobuf", 00:32:04.438 "config": [ 00:32:04.438 { 00:32:04.438 "method": "iobuf_set_options", 00:32:04.438 "params": { 00:32:04.438 "small_pool_count": 8192, 00:32:04.438 "large_pool_count": 1024, 00:32:04.438 "small_bufsize": 8192, 00:32:04.438 "large_bufsize": 135168 00:32:04.438 } 00:32:04.438 } 00:32:04.438 ] 00:32:04.438 }, 00:32:04.438 { 00:32:04.438 "subsystem": "sock", 00:32:04.438 "config": [ 00:32:04.438 { 00:32:04.438 "method": "sock_impl_set_options", 00:32:04.438 "params": { 00:32:04.438 "impl_name": "posix", 00:32:04.438 "recv_buf_size": 2097152, 00:32:04.438 "send_buf_size": 2097152, 00:32:04.438 "enable_recv_pipe": true, 00:32:04.438 "enable_quickack": false, 00:32:04.438 "enable_placement_id": 0, 00:32:04.438 "enable_zerocopy_send_server": true, 00:32:04.438 "enable_zerocopy_send_client": false, 00:32:04.438 "zerocopy_threshold": 0, 00:32:04.438 "tls_version": 0, 00:32:04.438 "enable_ktls": false 00:32:04.438 } 00:32:04.438 }, 00:32:04.438 { 00:32:04.438 "method": "sock_impl_set_options", 00:32:04.438 "params": { 00:32:04.438 "impl_name": "ssl", 00:32:04.438 "recv_buf_size": 4096, 00:32:04.438 "send_buf_size": 4096, 00:32:04.438 "enable_recv_pipe": true, 00:32:04.438 "enable_quickack": false, 00:32:04.438 "enable_placement_id": 0, 00:32:04.438 "enable_zerocopy_send_server": true, 00:32:04.438 "enable_zerocopy_send_client": false, 00:32:04.438 "zerocopy_threshold": 0, 00:32:04.438 "tls_version": 0, 00:32:04.438 "enable_ktls": false 00:32:04.438 } 00:32:04.438 } 00:32:04.438 ] 00:32:04.438 }, 00:32:04.438 { 00:32:04.438 "subsystem": "vmd", 00:32:04.438 "config": [] 00:32:04.438 }, 00:32:04.438 { 00:32:04.438 "subsystem": "accel", 00:32:04.438 "config": [ 00:32:04.438 { 00:32:04.438 "method": "accel_set_options", 00:32:04.438 "params": { 00:32:04.438 "small_cache_size": 128, 00:32:04.438 "large_cache_size": 16, 00:32:04.438 "task_count": 2048, 00:32:04.438 "sequence_count": 2048, 00:32:04.438 "buf_count": 2048 00:32:04.438 } 00:32:04.438 } 00:32:04.438 ] 00:32:04.438 }, 00:32:04.438 { 00:32:04.438 "subsystem": "bdev", 00:32:04.438 "config": [ 00:32:04.438 { 00:32:04.438 "method": "bdev_set_options", 00:32:04.438 "params": { 00:32:04.438 "bdev_io_pool_size": 65535, 00:32:04.438 "bdev_io_cache_size": 256, 00:32:04.438 "bdev_auto_examine": true, 00:32:04.438 "iobuf_small_cache_size": 128, 00:32:04.438 "iobuf_large_cache_size": 16 00:32:04.438 } 00:32:04.438 }, 00:32:04.438 { 00:32:04.438 "method": "bdev_raid_set_options", 00:32:04.438 "params": { 00:32:04.438 "process_window_size_kb": 1024 00:32:04.438 } 00:32:04.438 }, 00:32:04.438 { 00:32:04.438 "method": "bdev_iscsi_set_options", 00:32:04.438 "params": { 00:32:04.438 "timeout_sec": 30 00:32:04.438 } 00:32:04.438 }, 00:32:04.438 { 00:32:04.438 "method": "bdev_nvme_set_options", 00:32:04.438 "params": { 00:32:04.438 "action_on_timeout": "none", 00:32:04.438 "timeout_us": 0, 00:32:04.438 "timeout_admin_us": 0, 00:32:04.438 
"keep_alive_timeout_ms": 10000, 00:32:04.438 "arbitration_burst": 0, 00:32:04.438 "low_priority_weight": 0, 00:32:04.438 "medium_priority_weight": 0, 00:32:04.438 "high_priority_weight": 0, 00:32:04.438 "nvme_adminq_poll_period_us": 10000, 00:32:04.438 "nvme_ioq_poll_period_us": 0, 00:32:04.438 "io_queue_requests": 512, 00:32:04.438 "delay_cmd_submit": true, 00:32:04.438 "transport_retry_count": 4, 00:32:04.438 "bdev_retry_count": 3, 00:32:04.438 "transport_ack_timeout": 0, 00:32:04.438 "ctrlr_loss_timeout_sec": 0, 00:32:04.438 "reconnect_delay_sec": 0, 00:32:04.438 "fast_io_fail_timeout_sec": 0, 00:32:04.438 "disable_auto_failback": false, 00:32:04.438 "generate_uuids": false, 00:32:04.438 "transport_tos": 0, 00:32:04.438 "nvme_error_stat": false, 00:32:04.438 "rdma_srq_size": 0, 00:32:04.438 "io_path_stat": false, 00:32:04.438 "allow_accel_sequence": false, 00:32:04.438 "rdma_max_cq_size": 0, 00:32:04.438 "rdma_cm_event_timeout_ms": 0, 00:32:04.438 "dhchap_digests": [ 00:32:04.438 "sha256", 00:32:04.438 "sha384", 00:32:04.438 "sha512" 00:32:04.438 ], 00:32:04.438 "dhchap_dhgroups": [ 00:32:04.438 "null", 00:32:04.438 "ffdhe2048", 00:32:04.438 "ffdhe3072", 00:32:04.438 "ffdhe4096", 00:32:04.438 "ffdhe6144", 00:32:04.438 "ffdhe8192" 00:32:04.438 ] 00:32:04.438 } 00:32:04.438 }, 00:32:04.438 { 00:32:04.438 "method": "bdev_nvme_attach_controller", 00:32:04.438 "params": { 00:32:04.438 "name": "nvme0", 00:32:04.438 "trtype": "TCP", 00:32:04.438 "adrfam": "IPv4", 00:32:04.438 "traddr": "10.0.0.2", 00:32:04.438 "trsvcid": "4420", 00:32:04.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:04.438 "prchk_reftag": false, 00:32:04.438 "prchk_guard": false, 00:32:04.438 "ctrlr_loss_timeout_sec": 0, 00:32:04.438 "reconnect_delay_sec": 0, 00:32:04.438 "fast_io_fail_timeout_sec": 0, 00:32:04.438 "psk": "key0", 00:32:04.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:04.438 "hdgst": false, 00:32:04.438 "ddgst": false 00:32:04.438 } 00:32:04.438 }, 00:32:04.438 { 00:32:04.438 "method": "bdev_nvme_set_hotplug", 00:32:04.438 "params": { 00:32:04.438 "period_us": 100000, 00:32:04.438 "enable": false 00:32:04.438 } 00:32:04.438 }, 00:32:04.438 { 00:32:04.438 "method": "bdev_enable_histogram", 00:32:04.438 "params": { 00:32:04.438 "name": "nvme0n1", 00:32:04.438 "enable": true 00:32:04.438 } 00:32:04.438 }, 00:32:04.438 { 00:32:04.438 "method": "bdev_wait_for_examine" 00:32:04.438 } 00:32:04.438 ] 00:32:04.438 }, 00:32:04.438 { 00:32:04.438 "subsystem": "nbd", 00:32:04.438 "config": [] 00:32:04.438 } 00:32:04.438 ] 00:32:04.438 }' 00:32:04.697 [2024-05-15 08:57:59.250577] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:32:04.697 [2024-05-15 08:57:59.250662] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2322180 ] 00:32:04.697 EAL: No free 2048 kB hugepages reported on node 1 00:32:04.697 [2024-05-15 08:57:59.326414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.697 [2024-05-15 08:57:59.412588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.955 [2024-05-15 08:57:59.576959] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:05.521 08:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:05.521 08:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:32:05.521 08:58:00 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:05.521 08:58:00 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:32:05.779 08:58:00 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.779 08:58:00 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:05.779 Running I/O for 1 seconds... 00:32:07.152 00:32:07.152 Latency(us) 00:32:07.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.152 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:07.152 Verification LBA range: start 0x0 length 0x2000 00:32:07.152 nvme0n1 : 1.03 3026.26 11.82 0.00 0.00 41650.37 6844.87 54370.61 00:32:07.152 =================================================================================================================== 00:32:07.152 Total : 3026.26 11.82 0.00 0.00 41650.37 6844.87 54370.61 00:32:07.152 0 00:32:07.152 08:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:32:07.152 08:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:32:07.152 08:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:32:07.152 08:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # type=--id 00:32:07.152 08:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # id=0 00:32:07.152 08:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:32:07.152 08:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:07.153 08:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:32:07.153 08:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:32:07.153 08:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # for n in $shm_files 00:32:07.153 08:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:07.153 nvmf_trace.0 00:32:07.153 08:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # return 0 00:32:07.153 08:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2322180 00:32:07.153 08:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2322180 ']' 00:32:07.153 08:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2322180 
00:32:07.153 08:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:32:07.153 08:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:32:07.153 08:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2322180 00:32:07.153 08:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:32:07.153 08:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:32:07.153 08:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2322180' 00:32:07.153 killing process with pid 2322180 00:32:07.153 08:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2322180 00:32:07.153 Received shutdown signal, test time was about 1.000000 seconds 00:32:07.153 00:32:07.153 Latency(us) 00:32:07.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.153 =================================================================================================================== 00:32:07.153 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:07.153 08:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2322180 00:32:07.153 08:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:32:07.153 08:58:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:07.153 08:58:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:32:07.411 08:58:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:07.411 08:58:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:32:07.411 08:58:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:07.411 08:58:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:07.411 rmmod nvme_tcp 00:32:07.411 rmmod nvme_fabrics 00:32:07.411 rmmod nvme_keyring 00:32:07.411 08:58:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:07.411 08:58:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:32:07.411 08:58:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:32:07.411 08:58:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2322028 ']' 00:32:07.411 08:58:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2322028 00:32:07.411 08:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2322028 ']' 00:32:07.411 08:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2322028 00:32:07.411 08:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:32:07.411 08:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:32:07.411 08:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2322028 00:32:07.411 08:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:32:07.411 08:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:32:07.411 08:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2322028' 00:32:07.411 killing process with pid 2322028 00:32:07.411 08:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2322028 00:32:07.411 [2024-05-15 08:58:02.032125] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:32:07.411 08:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- 
# wait 2322028 00:32:07.671 08:58:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:07.671 08:58:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:07.671 08:58:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:07.671 08:58:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:07.671 08:58:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:07.671 08:58:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.671 08:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:07.671 08:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.572 08:58:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:09.572 08:58:04 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.rKruHcyuOo /tmp/tmp.aS5suBF3g1 /tmp/tmp.ydBc68Nn4l 00:32:09.572 00:32:09.572 real 1m19.339s 00:32:09.572 user 2m4.958s 00:32:09.572 sys 0m27.342s 00:32:09.572 08:58:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # xtrace_disable 00:32:09.572 08:58:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:09.572 ************************************ 00:32:09.572 END TEST nvmf_tls 00:32:09.572 ************************************ 00:32:09.572 08:58:04 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:32:09.572 08:58:04 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:32:09.572 08:58:04 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:32:09.572 08:58:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:09.830 ************************************ 00:32:09.830 START TEST nvmf_fips 00:32:09.830 ************************************ 00:32:09.830 08:58:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:32:09.830 * Looking for test storage... 
00:32:09.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:32:09.830 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:09.830 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:32:09.830 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:09.830 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:09.830 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:09.830 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:09.830 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:09.830 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:09.830 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:09.830 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:09.830 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:09.830 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:09.830 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:09.830 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:09.830 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:09.830 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.831 08:58:04 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:32:09.831 Error setting digest 00:32:09.831 00824DE3537F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:32:09.831 00824DE3537F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:32:09.831 08:58:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:12.392 
08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:12.392 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:12.392 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:12.392 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:12.393 Found net devices under 0000:09:00.0: cvl_0_0 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:12.393 Found net devices under 0000:09:00.1: cvl_0_1 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:12.393 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:12.651 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:12.651 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:12.651 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:12.651 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:12.651 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:12.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:12.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:32:12.651 00:32:12.651 --- 10.0.0.2 ping statistics --- 00:32:12.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.651 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:32:12.651 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:12.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:12.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:32:12.651 00:32:12.651 --- 10.0.0.1 ping statistics --- 00:32:12.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.651 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:32:12.651 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:12.651 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:32:12.651 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:12.651 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:12.651 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:12.651 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:12.651 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:12.651 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:12.651 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:12.651 08:58:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:32:12.651 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:12.652 08:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@721 -- # xtrace_disable 00:32:12.652 08:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:32:12.652 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2324834 00:32:12.652 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:12.652 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2324834 00:32:12.652 08:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # '[' -z 2324834 ']' 00:32:12.652 08:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:12.652 08:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local max_retries=100 00:32:12.652 08:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:12.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:12.652 08:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:12.652 08:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:32:12.652 [2024-05-15 08:58:07.314398] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:32:12.652 [2024-05-15 08:58:07.314489] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:12.652 EAL: No free 2048 kB hugepages reported on node 1 00:32:12.652 [2024-05-15 08:58:07.387970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.910 [2024-05-15 08:58:07.471789] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:12.910 [2024-05-15 08:58:07.471848] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
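At this point the FIPS test has established its preconditions: OpenSSL reports 3.0.9 (>= the 3.0.0 target), /usr/lib64/ossl-modules/fips.so is present, and the failed `openssl md5` above is the expected positive proof, since MD5 cannot be fetched while the FIPS provider is active. The harness then builds the TCP test topology and starts the target inside a network namespace. A minimal sketch of that setup, assembled from the commands in this trace (cvl_0_0/cvl_0_1 are this rig's E810/ice ports; names and addresses are specific to this runner):

ip netns add cvl_0_0_ns_spdk                        # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                  # root ns -> target ns reachability check

With both directions pingable, nvmf_tgt is launched via `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2`, so the initiator-side tooling in the root namespace exercises a real TCP path between the two back-to-back ports.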
00:32:12.910 [2024-05-15 08:58:07.471862] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:12.910 [2024-05-15 08:58:07.471872] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:12.910 [2024-05-15 08:58:07.471882] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:12.910 [2024-05-15 08:58:07.471908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:12.910 08:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:12.910 08:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:32:12.910 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:12.910 08:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@727 -- # xtrace_disable 00:32:12.910 08:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:32:12.910 08:58:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:12.910 08:58:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:32:12.910 08:58:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:32:12.910 08:58:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:32:12.910 08:58:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:32:12.910 08:58:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:32:12.910 08:58:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:32:12.910 08:58:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:32:12.910 08:58:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:13.168 [2024-05-15 08:58:07.892596] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:13.168 [2024-05-15 08:58:07.908533] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:32:13.168 [2024-05-15 08:58:07.908608] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:13.168 [2024-05-15 08:58:07.908861] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.168 [2024-05-15 08:58:07.941024] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:13.168 malloc0 00:32:13.426 08:58:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:13.426 08:58:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2324869 00:32:13.426 08:58:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:32:13.426 08:58:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2324869 /var/tmp/bdevperf.sock 00:32:13.426 08:58:07 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@828 -- # '[' -z 2324869 ']' 00:32:13.426 08:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:13.426 08:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local max_retries=100 00:32:13.426 08:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:13.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:13.426 08:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:13.426 08:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:32:13.426 [2024-05-15 08:58:08.031812] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:32:13.426 [2024-05-15 08:58:08.031906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2324869 ] 00:32:13.426 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.426 [2024-05-15 08:58:08.097770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.426 [2024-05-15 08:58:08.179023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:13.684 08:58:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:13.684 08:58:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:32:13.684 08:58:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:32:13.942 [2024-05-15 08:58:08.504397] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:13.942 [2024-05-15 08:58:08.504550] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:32:13.942 TLSTESTn1 00:32:13.942 08:58:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:13.942 Running I/O for 10 seconds... 
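The TLS exercise itself reduces to two RPC calls against the bdevperf app, both visible above; a condensed sketch (paths shortened for readability, otherwise copied from the trace; the PSK file is the interchange-format key written and chmod 0600'd earlier):

# attach a controller over TCP with a pre-shared key, which forces a TLS handshake
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk test/nvmf/fips/key.txt
# then run the configured verify workload (qd 128, 4 KiB I/Os) for 10 s over TLSTESTn1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

As a sanity check on the numbers that follow: 3003.84 IOPS x 4096 B ≈ 11.73 MiB/s, matching the MiB/s column, and Little's law predicts an average latency of 128 / 3003.84 ≈ 42.6 ms against the ~42.5 ms reported, so the table is internally consistent (modest, but this test validates the FIPS/TLS data path rather than performance).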
00:32:26.139 00:32:26.139 Latency(us) 00:32:26.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:26.139 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:26.139 Verification LBA range: start 0x0 length 0x2000 00:32:26.139 TLSTESTn1 : 10.04 3003.84 11.73 0.00 0.00 42514.94 9077.95 67574.90 00:32:26.139 =================================================================================================================== 00:32:26.139 Total : 3003.84 11.73 0.00 0.00 42514.94 9077.95 67574.90 00:32:26.139 0 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # type=--id 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # id=0 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # for n in $shm_files 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:26.139 nvmf_trace.0 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # return 0 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2324869 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 2324869 ']' 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill -0 2324869 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2324869 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2324869' 00:32:26.139 killing process with pid 2324869 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 2324869 00:32:26.139 Received shutdown signal, test time was about 10.000000 seconds 00:32:26.139 00:32:26.139 Latency(us) 00:32:26.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:26.139 =================================================================================================================== 00:32:26.139 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:26.139 [2024-05-15 08:58:18.870214] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:32:26.139 08:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 2324869 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:26.139 rmmod nvme_tcp 00:32:26.139 rmmod nvme_fabrics 00:32:26.139 rmmod nvme_keyring 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2324834 ']' 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2324834 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 2324834 ']' 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill -0 2324834 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2324834 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2324834' 00:32:26.139 killing process with pid 2324834 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 2324834 00:32:26.139 [2024-05-15 08:58:19.190323] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:32:26.139 [2024-05-15 08:58:19.190377] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 2324834 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:26.139 08:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.705 08:58:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:26.705 08:58:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:32:26.705 00:32:26.705 real 0m17.090s 00:32:26.705 user 0m18.096s 00:32:26.705 sys 0m7.184s 00:32:26.705 08:58:21 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # xtrace_disable 00:32:26.705 08:58:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:32:26.705 ************************************ 00:32:26.705 END TEST nvmf_fips 00:32:26.705 ************************************ 00:32:26.705 08:58:21 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:32:26.705 08:58:21 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:32:26.705 08:58:21 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:32:26.705 08:58:21 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:32:26.705 08:58:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:26.963 ************************************ 00:32:26.963 START TEST nvmf_fuzz 00:32:26.963 ************************************ 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:32:26.963 * Looking for test storage... 00:32:26.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:32:26.963 08:58:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:29.493 08:58:24 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:29.493 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:29.493 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:29.493 Found net devices under 0000:09:00.0: cvl_0_0 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:29.493 08:58:24 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:29.493 Found net devices under 0000:09:00.1: cvl_0_1 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:29.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:29.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:32:29.493 00:32:29.493 --- 10.0.0.2 ping statistics --- 00:32:29.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.493 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:29.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:29.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:32:29.493 00:32:29.493 --- 10.0.0.1 ping statistics --- 00:32:29.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.493 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:29.493 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:32:29.494 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:29.494 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:29.494 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:29.494 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:29.494 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:29.494 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:29.494 08:58:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:29.494 08:58:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2328442 00:32:29.494 08:58:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:32:29.494 08:58:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:32:29.494 08:58:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2328442 00:32:29.494 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@828 -- # '[' -z 2328442 ']' 00:32:29.494 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.494 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local max_retries=100 00:32:29.494 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:29.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
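Before the fuzzer runs, the target is provisioned with a single malloc-backed namespace; the rpc_cmd sequence traced below amounts to the following (commands copied from the trace; -u is believed to set the transport I/O unit size):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create -b Malloc0 64 512          # 64 MiB ramdisk, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# unguided pass: 30 s of randomized commands, fixed seed so any crash reproduces
nvme_fuzz -m 0x2 -t 30 -S 123456 -N -a \
    -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

A second, JSON-guided pass (-j example.json) follows with canned command patterns; the remaining flags are as traced.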
00:32:29.494 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:29.494 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@861 -- # return 0 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:30.058 Malloc0 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:32:30.058 08:58:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:33:02.113 Fuzzing completed. 
Shutting down the fuzz application 00:33:02.113 00:33:02.113 Dumping successful admin opcodes: 00:33:02.113 8, 9, 10, 24, 00:33:02.113 Dumping successful io opcodes: 00:33:02.113 0, 9, 00:33:02.113 NS: 0x200003aeff00 I/O qp, Total commands completed: 445056, total successful commands: 2586, random_seed: 4051927616 00:33:02.113 NS: 0x200003aeff00 admin qp, Total commands completed: 55760, total successful commands: 444, random_seed: 2214475264 00:33:02.113 08:58:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:33:02.113 Fuzzing completed. Shutting down the fuzz application 00:33:02.113 00:33:02.113 Dumping successful admin opcodes: 00:33:02.113 24, 00:33:02.113 Dumping successful io opcodes: 00:33:02.113 00:33:02.113 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1798228358 00:33:02.113 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1798359322 00:33:02.113 08:58:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:02.113 08:58:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:02.113 08:58:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:33:02.113 08:58:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:02.113 08:58:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:33:02.113 08:58:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:33:02.113 08:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:02.113 08:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:33:02.113 08:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:02.113 08:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:33:02.113 08:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:02.113 08:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:02.113 rmmod nvme_tcp 00:33:02.113 rmmod nvme_fabrics 00:33:02.113 rmmod nvme_keyring 00:33:02.113 08:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:02.113 08:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:33:02.113 08:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:33:02.113 08:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 2328442 ']' 00:33:02.113 08:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 2328442 00:33:02.113 08:58:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@947 -- # '[' -z 2328442 ']' 00:33:02.114 08:58:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # kill -0 2328442 00:33:02.114 08:58:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # uname 00:33:02.114 08:58:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:33:02.114 08:58:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2328442 00:33:02.114 08:58:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:33:02.114 08:58:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 
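Interpreting the opcode dumps above as decimal NVMe opcodes (an inference; the tool itself does not label them):

  admin: 8 = Abort, 9 = Set Features, 10 = Get Features, 24 = Keep Alive
  io:    0 = Flush, 9 = Dataset Management

That is, under pure random fuzzing only commands without strict payload or field requirements complete successfully: 2586 of 445056 I/O commands (~0.6%). The guided run with example.json completes no I/O commands and only 4 admin commands (opcode 24, Keep Alive), presumably because its canned commands are deliberately hostile; the pass criterion is that the target survives and shuts down cleanly, which the cleanup trace and the END TEST marker below confirm.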
00:33:02.114 08:58:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2328442' 00:33:02.114 killing process with pid 2328442 00:33:02.114 08:58:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # kill 2328442 00:33:02.114 08:58:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@971 -- # wait 2328442 00:33:02.114 08:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:02.114 08:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:02.114 08:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:02.114 08:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:02.114 08:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:02.114 08:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.114 08:58:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:02.114 08:58:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:04.046 08:58:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:04.046 08:58:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:33:04.046 00:33:04.046 real 0m37.272s 00:33:04.046 user 0m50.463s 00:33:04.046 sys 0m15.696s 00:33:04.046 08:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # xtrace_disable 00:33:04.046 08:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:33:04.046 ************************************ 00:33:04.046 END TEST nvmf_fuzz 00:33:04.046 ************************************ 00:33:04.046 08:58:58 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:33:04.046 08:58:58 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:33:04.046 08:58:58 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:33:04.046 08:58:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:04.303 ************************************ 00:33:04.303 START TEST nvmf_multiconnection 00:33:04.303 ************************************ 00:33:04.303 08:58:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:33:04.303 * Looking for test storage... 
00:33:04.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:04.303 08:58:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:04.303 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:33:04.303 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:04.303 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:04.303 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:04.303 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:04.303 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:04.303 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:04.303 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:04.303 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:04.303 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:04.303 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:04.303 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:04.303 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:04.303 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:04.303 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:04.303 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:04.303 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:33:04.304 08:58:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:06.830 08:59:01 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:06.830 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:06.830 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:06.830 Found net devices under 0000:09:00.0: cvl_0_0 00:33:06.830 08:59:01 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.830 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:06.831 Found net devices under 0000:09:00.1: cvl_0_1 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
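The two cvl_0_* interfaces found above are the two ports of the dual-port E810 NIC (0000:09:00.0 and 0000:09:00.1). nvmf_tcp_init splits them: the target port cvl_0_0 is moved into a fresh network namespace and addressed as 10.0.0.2, while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator side. Condensed from the trace above, with interface and namespace names exactly as printed in the log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up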
00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:06.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:06.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:33:06.831 00:33:06.831 --- 10.0.0.2 ping statistics --- 00:33:06.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.831 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:06.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:06.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:33:06.831 00:33:06.831 --- 10.0.0.1 ping statistics --- 00:33:06.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.831 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@721 -- # xtrace_disable 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=2334417 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 2334417 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@828 -- # '[' -z 2334417 ']' 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local max_retries=100 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:06.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
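Note that nvmfappstart launches the target inside the namespace via ip netns exec, so its TCP listener binds 10.0.0.2 while the initiator tooling keeps running in the root namespace. A rough stand-in for the nvmfappstart/waitforlisten pair traced here, again assuming $SPDK_DIR as a placeholder and polling the RPC socket rather than reusing the harness's waitforlisten helper:

  ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # crude readiness check: wait for the UNIX-domain RPC socket to appear
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done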
00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@837 -- # xtrace_disable 00:33:06.831 08:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.089 [2024-05-15 08:59:01.662961] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:33:07.089 [2024-05-15 08:59:01.663042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:07.089 EAL: No free 2048 kB hugepages reported on node 1 00:33:07.089 [2024-05-15 08:59:01.736984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:07.089 [2024-05-15 08:59:01.822932] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:07.089 [2024-05-15 08:59:01.823003] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:07.089 [2024-05-15 08:59:01.823017] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:07.089 [2024-05-15 08:59:01.823028] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:07.089 [2024-05-15 08:59:01.823038] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:07.089 [2024-05-15 08:59:01.823117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:07.089 [2024-05-15 08:59:01.823183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:07.089 [2024-05-15 08:59:01.823250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:07.089 [2024-05-15 08:59:01.823254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.347 08:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:33:07.347 08:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@861 -- # return 0 00:33:07.347 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:07.347 08:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@727 -- # xtrace_disable 00:33:07.347 08:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.347 08:59:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:07.347 08:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:07.347 08:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.347 08:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.348 [2024-05-15 08:59:01.975872] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:07.348 08:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.348 08:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:33:07.348 08:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:07.348 08:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:33:07.348 08:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.348 08:59:01 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.348 Malloc1 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.348 [2024-05-15 08:59:02.031548] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:07.348 [2024-05-15 08:59:02.031879] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.348 Malloc2 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.348 Malloc3 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.348 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 Malloc4 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode4 Malloc4 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 Malloc5 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 Malloc6 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:33:07.607 
08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 Malloc7 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 
-- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 Malloc8 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 Malloc9 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.607 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.865 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.865 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:33:07.865 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.865 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.865 08:59:02 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.865 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:07.865 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:33:07.865 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.865 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.865 Malloc10 00:33:07.865 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.865 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:33:07.865 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.865 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.865 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.865 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:33:07.865 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.865 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.865 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.865 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:33:07.865 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.866 Malloc11 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:07.866 08:59:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:08.430 08:59:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:33:08.430 08:59:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:33:08.430 08:59:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:33:08.430 08:59:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:33:08.430 08:59:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:33:10.955 08:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:33:10.955 08:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:33:10.955 08:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK1 00:33:10.955 08:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:33:10.955 08:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:33:10.955 08:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:33:10.955 08:59:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:10.955 08:59:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:33:10.955 08:59:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:33:10.955 08:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:33:10.955 08:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:33:10.955 08:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:33:10.955 08:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:33:13.481 08:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:33:13.481 08:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:33:13.481 08:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK2 00:33:13.481 08:59:07 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:33:13.481 08:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:33:13.481 08:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:33:13.481 08:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:13.481 08:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:33:13.739 08:59:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:33:13.739 08:59:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:33:13.739 08:59:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:33:13.739 08:59:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:33:13.739 08:59:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:33:15.636 08:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:33:15.636 08:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:33:15.636 08:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK3 00:33:15.636 08:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:33:15.636 08:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:33:15.636 08:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:33:15.636 08:59:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:15.636 08:59:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:33:16.568 08:59:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:33:16.568 08:59:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:33:16.568 08:59:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:33:16.568 08:59:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:33:16.568 08:59:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:33:18.465 08:59:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:33:18.466 08:59:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:33:18.466 08:59:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK4 00:33:18.466 08:59:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:33:18.466 08:59:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:33:18.466 08:59:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 
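Each of the eleven subsystems is then attached from the initiator with nvme-cli, after which waitforserial polls lsblk until a block device carrying the expected SPDK serial appears (the trace shows the retry counter capped at 16 attempts). The next pass of the loop, for cnode5, untangles to roughly this, with the host NQN/ID copied from the log:

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode5 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      --hostid=29f67375-a902-e411-ace9-001e67bc3c9a
  # waitforserial: retry until the new namespace shows up with serial SPDK5
  until lsblk -l -o NAME,SERIAL | grep -q SPDK5; do sleep 2; done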
00:33:18.466 08:59:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:18.466 08:59:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:33:19.029 08:59:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:33:19.029 08:59:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:33:19.029 08:59:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:33:19.029 08:59:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:33:19.029 08:59:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:33:20.981 08:59:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:33:20.981 08:59:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:33:20.981 08:59:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK5 00:33:20.981 08:59:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:33:20.981 08:59:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:33:20.981 08:59:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:33:20.981 08:59:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:20.981 08:59:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:33:21.914 08:59:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:33:21.914 08:59:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:33:21.914 08:59:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:33:21.914 08:59:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:33:21.914 08:59:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:33:23.811 08:59:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:33:23.811 08:59:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:33:23.811 08:59:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK6 00:33:23.811 08:59:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:33:23.811 08:59:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:33:23.811 08:59:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:33:23.811 08:59:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:23.811 08:59:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:33:24.744 08:59:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:33:24.744 08:59:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:33:24.744 08:59:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:33:24.744 08:59:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:33:24.744 08:59:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:33:26.640 08:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:33:26.640 08:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:33:26.640 08:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK7 00:33:26.640 08:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:33:26.640 08:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:33:26.640 08:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:33:26.640 08:59:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:26.640 08:59:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:33:27.205 08:59:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:33:27.205 08:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:33:27.205 08:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:33:27.205 08:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:33:27.205 08:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:33:29.727 08:59:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:33:29.727 08:59:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:33:29.727 08:59:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK8 00:33:29.727 08:59:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:33:29.727 08:59:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:33:29.727 08:59:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:33:29.727 08:59:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:29.727 08:59:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:33:29.985 08:59:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:33:29.985 08:59:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 
00:33:29.985 08:59:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:33:29.985 08:59:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:33:29.985 08:59:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:33:32.512 08:59:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:33:32.512 08:59:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:33:32.512 08:59:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK9 00:33:32.512 08:59:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:33:32.512 08:59:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:33:32.512 08:59:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:33:32.512 08:59:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:32.512 08:59:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:33:33.078 08:59:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:33:33.078 08:59:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:33:33.078 08:59:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:33:33.078 08:59:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:33:33.078 08:59:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:33:34.974 08:59:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:33:34.974 08:59:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:33:34.974 08:59:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK10 00:33:34.974 08:59:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:33:34.974 08:59:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:33:34.974 08:59:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:33:34.974 08:59:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:34.974 08:59:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:33:35.906 08:59:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:33:35.906 08:59:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:33:35.906 08:59:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:33:35.906 08:59:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:33:35.906 08:59:30 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1202 -- # sleep 2 00:33:37.803 08:59:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:33:37.803 08:59:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:33:37.803 08:59:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK11 00:33:37.803 08:59:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:33:37.803 08:59:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:33:37.803 08:59:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:33:37.803 08:59:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:33:37.803 [global] 00:33:37.803 thread=1 00:33:37.803 invalidate=1 00:33:37.803 rw=read 00:33:37.803 time_based=1 00:33:37.803 runtime=10 00:33:37.803 ioengine=libaio 00:33:37.803 direct=1 00:33:37.803 bs=262144 00:33:37.803 iodepth=64 00:33:37.803 norandommap=1 00:33:37.803 numjobs=1 00:33:37.803 00:33:37.803 [job0] 00:33:37.803 filename=/dev/nvme0n1 00:33:37.803 [job1] 00:33:37.803 filename=/dev/nvme10n1 00:33:37.803 [job2] 00:33:37.803 filename=/dev/nvme1n1 00:33:37.803 [job3] 00:33:37.803 filename=/dev/nvme2n1 00:33:37.803 [job4] 00:33:37.803 filename=/dev/nvme3n1 00:33:37.803 [job5] 00:33:37.803 filename=/dev/nvme4n1 00:33:38.061 [job6] 00:33:38.061 filename=/dev/nvme5n1 00:33:38.061 [job7] 00:33:38.061 filename=/dev/nvme6n1 00:33:38.061 [job8] 00:33:38.061 filename=/dev/nvme7n1 00:33:38.061 [job9] 00:33:38.061 filename=/dev/nvme8n1 00:33:38.061 [job10] 00:33:38.061 filename=/dev/nvme9n1 00:33:38.061 Could not set queue depth (nvme0n1) 00:33:38.061 Could not set queue depth (nvme10n1) 00:33:38.061 Could not set queue depth (nvme1n1) 00:33:38.061 Could not set queue depth (nvme2n1) 00:33:38.061 Could not set queue depth (nvme3n1) 00:33:38.061 Could not set queue depth (nvme4n1) 00:33:38.061 Could not set queue depth (nvme5n1) 00:33:38.061 Could not set queue depth (nvme6n1) 00:33:38.061 Could not set queue depth (nvme7n1) 00:33:38.061 Could not set queue depth (nvme8n1) 00:33:38.061 Could not set queue depth (nvme9n1) 00:33:38.347 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:38.347 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:38.347 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:38.347 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:38.347 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:38.347 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:38.347 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:38.347 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:38.347 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:38.347 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:38.347 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:38.347 fio-3.35 00:33:38.347 Starting 11 threads 00:33:50.550 00:33:50.550 job0: (groupid=0, jobs=1): err= 0: pid=2338576: Wed May 15 08:59:43 2024 00:33:50.550 read: IOPS=663, BW=166MiB/s (174MB/s)(1674MiB/10094msec) 00:33:50.550 slat (usec): min=9, max=123757, avg=1047.15, stdev=4656.90 00:33:50.550 clat (usec): min=721, max=295880, avg=95327.46, stdev=52428.18 00:33:50.550 lat (usec): min=749, max=378969, avg=96374.61, stdev=53146.82 00:33:50.550 clat percentiles (msec): 00:33:50.550 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 34], 20.00th=[ 50], 00:33:50.550 | 30.00th=[ 66], 40.00th=[ 75], 50.00th=[ 87], 60.00th=[ 101], 00:33:50.550 | 70.00th=[ 122], 80.00th=[ 146], 90.00th=[ 165], 95.00th=[ 190], 00:33:50.550 | 99.00th=[ 239], 99.50th=[ 251], 99.90th=[ 259], 99.95th=[ 288], 00:33:50.550 | 99.99th=[ 296] 00:33:50.550 bw ( KiB/s): min=94720, max=283648, per=8.68%, avg=169804.80, stdev=58055.92, samples=20 00:33:50.550 iops : min= 370, max= 1108, avg=663.30, stdev=226.78, samples=20 00:33:50.550 lat (usec) : 750=0.04%, 1000=0.01% 00:33:50.550 lat (msec) : 2=0.27%, 4=0.51%, 10=2.82%, 20=1.69%, 50=15.53% 00:33:50.550 lat (msec) : 100=39.16%, 250=39.49%, 500=0.48% 00:33:50.550 cpu : usr=0.32%, sys=1.83%, ctx=1424, majf=0, minf=4097 00:33:50.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:33:50.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:50.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:50.550 issued rwts: total=6696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:50.550 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:50.550 job1: (groupid=0, jobs=1): err= 0: pid=2338605: Wed May 15 08:59:43 2024 00:33:50.550 read: IOPS=731, BW=183MiB/s (192MB/s)(1837MiB/10044msec) 00:33:50.550 slat (usec): min=9, max=151913, avg=1010.59, stdev=3852.55 00:33:50.550 clat (msec): min=2, max=234, avg=86.37, stdev=41.91 00:33:50.550 lat (msec): min=2, max=295, avg=87.38, stdev=42.41 00:33:50.550 clat percentiles (msec): 00:33:50.550 | 1.00th=[ 12], 5.00th=[ 26], 10.00th=[ 36], 20.00th=[ 53], 00:33:50.550 | 30.00th=[ 64], 40.00th=[ 73], 50.00th=[ 83], 60.00th=[ 92], 00:33:50.550 | 70.00th=[ 103], 80.00th=[ 115], 90.00th=[ 144], 95.00th=[ 167], 00:33:50.550 | 99.00th=[ 209], 99.50th=[ 218], 99.90th=[ 228], 99.95th=[ 232], 00:33:50.550 | 99.99th=[ 236] 00:33:50.550 bw ( KiB/s): min=87040, max=406528, per=9.53%, avg=186496.00, stdev=74461.82, samples=20 00:33:50.550 iops : min= 340, max= 1588, avg=728.50, stdev=290.87, samples=20 00:33:50.550 lat (msec) : 4=0.01%, 10=0.61%, 20=2.97%, 50=14.94%, 100=49.18% 00:33:50.550 lat (msec) : 250=32.28% 00:33:50.550 cpu : usr=0.35%, sys=2.23%, ctx=1558, majf=0, minf=4097 00:33:50.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:33:50.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:50.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:50.550 issued rwts: total=7348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:50.550 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:50.550 job2: (groupid=0, jobs=1): err= 0: pid=2338635: Wed May 15 08:59:43 2024 00:33:50.550 read: IOPS=604, BW=151MiB/s (158MB/s)(1521MiB/10067msec) 00:33:50.550 slat (usec): min=9, max=59856, avg=1424.99, stdev=4292.59 
00:33:50.550 clat (usec): min=1162, max=249275, avg=104367.38, stdev=44003.26 00:33:50.550 lat (usec): min=1185, max=254493, avg=105792.37, stdev=44669.19 00:33:50.550 clat percentiles (msec): 00:33:50.550 | 1.00th=[ 8], 5.00th=[ 28], 10.00th=[ 53], 20.00th=[ 73], 00:33:50.550 | 30.00th=[ 82], 40.00th=[ 91], 50.00th=[ 101], 60.00th=[ 109], 00:33:50.550 | 70.00th=[ 122], 80.00th=[ 144], 90.00th=[ 163], 95.00th=[ 180], 00:33:50.550 | 99.00th=[ 228], 99.50th=[ 234], 99.90th=[ 247], 99.95th=[ 247], 00:33:50.550 | 99.99th=[ 249] 00:33:50.550 bw ( KiB/s): min=82432, max=216064, per=7.88%, avg=154112.00, stdev=43152.76, samples=20 00:33:50.550 iops : min= 322, max= 844, avg=602.00, stdev=168.57, samples=20 00:33:50.550 lat (msec) : 2=0.02%, 4=0.20%, 10=1.46%, 20=1.86%, 50=5.97% 00:33:50.550 lat (msec) : 100=40.82%, 250=49.68% 00:33:50.551 cpu : usr=0.30%, sys=2.05%, ctx=1327, majf=0, minf=4097 00:33:50.551 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:33:50.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:50.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:50.551 issued rwts: total=6083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:50.551 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:50.551 job3: (groupid=0, jobs=1): err= 0: pid=2338653: Wed May 15 08:59:43 2024 00:33:50.551 read: IOPS=818, BW=205MiB/s (215MB/s)(2066MiB/10096msec) 00:33:50.551 slat (usec): min=9, max=64958, avg=910.58, stdev=3225.40 00:33:50.551 clat (msec): min=2, max=239, avg=77.20, stdev=42.86 00:33:50.551 lat (msec): min=2, max=243, avg=78.11, stdev=43.33 00:33:50.551 clat percentiles (msec): 00:33:50.551 | 1.00th=[ 8], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 40], 00:33:50.551 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 69], 60.00th=[ 79], 00:33:50.551 | 70.00th=[ 92], 80.00th=[ 109], 90.00th=[ 142], 95.00th=[ 163], 00:33:50.551 | 99.00th=[ 201], 99.50th=[ 220], 99.90th=[ 228], 99.95th=[ 230], 00:33:50.551 | 99.99th=[ 241] 00:33:50.551 bw ( KiB/s): min=89600, max=419840, per=10.73%, avg=209920.00, stdev=87718.33, samples=20 00:33:50.551 iops : min= 350, max= 1640, avg=820.00, stdev=342.65, samples=20 00:33:50.551 lat (msec) : 4=0.31%, 10=2.25%, 20=0.76%, 50=25.25%, 100=46.82% 00:33:50.551 lat (msec) : 250=24.60% 00:33:50.551 cpu : usr=0.36%, sys=2.65%, ctx=1684, majf=0, minf=4097 00:33:50.551 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:50.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:50.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:50.551 issued rwts: total=8263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:50.551 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:50.551 job4: (groupid=0, jobs=1): err= 0: pid=2338661: Wed May 15 08:59:43 2024 00:33:50.551 read: IOPS=1027, BW=257MiB/s (269MB/s)(2594MiB/10099msec) 00:33:50.551 slat (usec): min=9, max=103024, avg=702.19, stdev=3256.45 00:33:50.551 clat (msec): min=2, max=244, avg=61.52, stdev=44.13 00:33:50.551 lat (msec): min=2, max=244, avg=62.22, stdev=44.56 00:33:50.551 clat percentiles (msec): 00:33:50.551 | 1.00th=[ 7], 5.00th=[ 23], 10.00th=[ 29], 20.00th=[ 31], 00:33:50.551 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 37], 60.00th=[ 53], 00:33:50.551 | 70.00th=[ 77], 80.00th=[ 102], 90.00th=[ 132], 95.00th=[ 159], 00:33:50.551 | 99.00th=[ 188], 99.50th=[ 197], 99.90th=[ 203], 99.95th=[ 205], 00:33:50.551 | 99.99th=[ 211] 00:33:50.551 bw ( KiB/s): 
min=110080, max=533504, per=13.49%, avg=263985.65, stdev=121781.76, samples=20 00:33:50.551 iops : min= 430, max= 2084, avg=1031.15, stdev=475.71, samples=20 00:33:50.551 lat (msec) : 4=0.21%, 10=1.23%, 20=2.90%, 50=54.46%, 100=20.93% 00:33:50.551 lat (msec) : 250=20.26% 00:33:50.551 cpu : usr=0.55%, sys=3.02%, ctx=2089, majf=0, minf=3721 00:33:50.551 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:33:50.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:50.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:50.551 issued rwts: total=10376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:50.551 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:50.551 job5: (groupid=0, jobs=1): err= 0: pid=2338672: Wed May 15 08:59:43 2024 00:33:50.551 read: IOPS=608, BW=152MiB/s (159MB/s)(1535MiB/10100msec) 00:33:50.551 slat (usec): min=9, max=131562, avg=1032.73, stdev=5019.23 00:33:50.551 clat (usec): min=1182, max=310859, avg=104100.08, stdev=55293.40 00:33:50.551 lat (usec): min=1215, max=316335, avg=105132.81, stdev=55898.24 00:33:50.551 clat percentiles (msec): 00:33:50.551 | 1.00th=[ 7], 5.00th=[ 16], 10.00th=[ 24], 20.00th=[ 62], 00:33:50.551 | 30.00th=[ 79], 40.00th=[ 89], 50.00th=[ 99], 60.00th=[ 113], 00:33:50.551 | 70.00th=[ 130], 80.00th=[ 155], 90.00th=[ 176], 95.00th=[ 199], 00:33:50.551 | 99.00th=[ 243], 99.50th=[ 259], 99.90th=[ 292], 99.95th=[ 292], 00:33:50.551 | 99.99th=[ 313] 00:33:50.551 bw ( KiB/s): min=71680, max=252928, per=7.95%, avg=155610.80, stdev=53062.71, samples=20 00:33:50.551 iops : min= 280, max= 988, avg=607.85, stdev=207.28, samples=20 00:33:50.551 lat (msec) : 2=0.10%, 4=0.18%, 10=2.21%, 20=5.18%, 50=8.61% 00:33:50.551 lat (msec) : 100=35.74%, 250=47.29%, 500=0.68% 00:33:50.551 cpu : usr=0.26%, sys=1.74%, ctx=1345, majf=0, minf=4097 00:33:50.551 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:33:50.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:50.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:50.551 issued rwts: total=6141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:50.551 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:50.551 job6: (groupid=0, jobs=1): err= 0: pid=2338675: Wed May 15 08:59:43 2024 00:33:50.551 read: IOPS=632, BW=158MiB/s (166MB/s)(1586MiB/10032msec) 00:33:50.551 slat (usec): min=9, max=117186, avg=1354.08, stdev=4910.04 00:33:50.551 clat (msec): min=4, max=333, avg=99.75, stdev=50.06 00:33:50.551 lat (msec): min=4, max=333, avg=101.11, stdev=50.87 00:33:50.551 clat percentiles (msec): 00:33:50.551 | 1.00th=[ 15], 5.00th=[ 32], 10.00th=[ 43], 20.00th=[ 58], 00:33:50.551 | 30.00th=[ 73], 40.00th=[ 84], 50.00th=[ 92], 60.00th=[ 102], 00:33:50.551 | 70.00th=[ 115], 80.00th=[ 138], 90.00th=[ 167], 95.00th=[ 194], 00:33:50.551 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 284], 99.95th=[ 284], 00:33:50.551 | 99.99th=[ 334] 00:33:50.551 bw ( KiB/s): min=72192, max=296960, per=8.22%, avg=160783.65, stdev=57034.78, samples=20 00:33:50.551 iops : min= 282, max= 1160, avg=628.05, stdev=222.79, samples=20 00:33:50.551 lat (msec) : 10=0.54%, 20=1.64%, 50=13.05%, 100=43.35%, 250=40.20% 00:33:50.551 lat (msec) : 500=1.21% 00:33:50.551 cpu : usr=0.35%, sys=2.08%, ctx=1302, majf=0, minf=4097 00:33:50.551 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:33:50.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:33:50.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:50.551 issued rwts: total=6343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:50.551 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:50.551 job7: (groupid=0, jobs=1): err= 0: pid=2338676: Wed May 15 08:59:43 2024 00:33:50.551 read: IOPS=567, BW=142MiB/s (149MB/s)(1428MiB/10070msec) 00:33:50.551 slat (usec): min=10, max=113749, avg=1412.55, stdev=5408.90 00:33:50.551 clat (usec): min=1333, max=308529, avg=111308.63, stdev=54500.93 00:33:50.551 lat (usec): min=1385, max=334818, avg=112721.18, stdev=55292.80 00:33:50.551 clat percentiles (msec): 00:33:50.551 | 1.00th=[ 5], 5.00th=[ 26], 10.00th=[ 45], 20.00th=[ 65], 00:33:50.551 | 30.00th=[ 82], 40.00th=[ 96], 50.00th=[ 107], 60.00th=[ 121], 00:33:50.551 | 70.00th=[ 140], 80.00th=[ 155], 90.00th=[ 182], 95.00th=[ 215], 00:33:50.551 | 99.00th=[ 251], 99.50th=[ 262], 99.90th=[ 296], 99.95th=[ 305], 00:33:50.551 | 99.99th=[ 309] 00:33:50.551 bw ( KiB/s): min=81920, max=207360, per=7.39%, avg=144588.80, stdev=36535.58, samples=20 00:33:50.551 iops : min= 320, max= 810, avg=564.80, stdev=142.72, samples=20 00:33:50.551 lat (msec) : 2=0.26%, 4=0.56%, 10=2.14%, 20=1.45%, 50=7.56% 00:33:50.551 lat (msec) : 100=32.67%, 250=54.33%, 500=1.02% 00:33:50.551 cpu : usr=0.37%, sys=2.02%, ctx=1321, majf=0, minf=4097 00:33:50.551 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:33:50.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:50.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:50.551 issued rwts: total=5711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:50.551 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:50.551 job8: (groupid=0, jobs=1): err= 0: pid=2338679: Wed May 15 08:59:43 2024 00:33:50.551 read: IOPS=702, BW=176MiB/s (184MB/s)(1762MiB/10031msec) 00:33:50.551 slat (usec): min=10, max=107799, avg=1137.98, stdev=4561.68 00:33:50.551 clat (usec): min=1935, max=289380, avg=89871.08, stdev=55808.87 00:33:50.551 lat (usec): min=1986, max=289428, avg=91009.06, stdev=56474.16 00:33:50.551 clat percentiles (msec): 00:33:50.551 | 1.00th=[ 6], 5.00th=[ 21], 10.00th=[ 30], 20.00th=[ 40], 00:33:50.551 | 30.00th=[ 51], 40.00th=[ 62], 50.00th=[ 80], 60.00th=[ 93], 00:33:50.551 | 70.00th=[ 114], 80.00th=[ 144], 90.00th=[ 174], 95.00th=[ 199], 00:33:50.551 | 99.00th=[ 232], 99.50th=[ 239], 99.90th=[ 251], 99.95th=[ 264], 00:33:50.551 | 99.99th=[ 288] 00:33:50.551 bw ( KiB/s): min=85504, max=481280, per=9.14%, avg=178764.80, stdev=94901.87, samples=20 00:33:50.551 iops : min= 334, max= 1880, avg=698.30, stdev=370.71, samples=20 00:33:50.551 lat (msec) : 2=0.03%, 4=0.58%, 10=1.49%, 20=2.34%, 50=25.01% 00:33:50.551 lat (msec) : 100=34.76%, 250=35.59%, 500=0.20% 00:33:50.551 cpu : usr=0.43%, sys=2.27%, ctx=1533, majf=0, minf=4097 00:33:50.551 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:33:50.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:50.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:50.551 issued rwts: total=7046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:50.551 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:50.551 job9: (groupid=0, jobs=1): err= 0: pid=2338680: Wed May 15 08:59:43 2024 00:33:50.551 read: IOPS=690, BW=173MiB/s (181MB/s)(1732MiB/10042msec) 00:33:50.551 slat (usec): min=9, max=73455, avg=1229.96, stdev=4075.89 
00:33:50.551 clat (msec): min=2, max=280, avg=91.42, stdev=44.12 00:33:50.551 lat (msec): min=2, max=280, avg=92.65, stdev=44.76 00:33:50.551 clat percentiles (msec): 00:33:50.551 | 1.00th=[ 5], 5.00th=[ 22], 10.00th=[ 35], 20.00th=[ 58], 00:33:50.551 | 30.00th=[ 71], 40.00th=[ 80], 50.00th=[ 88], 60.00th=[ 99], 00:33:50.551 | 70.00th=[ 108], 80.00th=[ 123], 90.00th=[ 150], 95.00th=[ 167], 00:33:50.551 | 99.00th=[ 228], 99.50th=[ 255], 99.90th=[ 271], 99.95th=[ 271], 00:33:50.551 | 99.99th=[ 279] 00:33:50.551 bw ( KiB/s): min=89600, max=310784, per=8.98%, avg=175769.60, stdev=51417.30, samples=20 00:33:50.551 iops : min= 350, max= 1214, avg=686.60, stdev=200.85, samples=20 00:33:50.551 lat (msec) : 4=0.91%, 10=2.11%, 20=1.76%, 50=9.58%, 100=47.47% 00:33:50.551 lat (msec) : 250=37.57%, 500=0.61% 00:33:50.551 cpu : usr=0.38%, sys=2.22%, ctx=1420, majf=0, minf=4097 00:33:50.551 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:33:50.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:50.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:50.551 issued rwts: total=6929,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:50.551 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:50.551 job10: (groupid=0, jobs=1): err= 0: pid=2338683: Wed May 15 08:59:43 2024 00:33:50.552 read: IOPS=622, BW=156MiB/s (163MB/s)(1567MiB/10066msec) 00:33:50.552 slat (usec): min=12, max=102942, avg=1313.63, stdev=4303.04 00:33:50.552 clat (msec): min=2, max=339, avg=101.39, stdev=53.81 00:33:50.552 lat (msec): min=2, max=351, avg=102.70, stdev=54.56 00:33:50.552 clat percentiles (msec): 00:33:50.552 | 1.00th=[ 7], 5.00th=[ 22], 10.00th=[ 31], 20.00th=[ 43], 00:33:50.552 | 30.00th=[ 79], 40.00th=[ 91], 50.00th=[ 100], 60.00th=[ 110], 00:33:50.552 | 70.00th=[ 125], 80.00th=[ 146], 90.00th=[ 169], 95.00th=[ 190], 00:33:50.552 | 99.00th=[ 247], 99.50th=[ 259], 99.90th=[ 271], 99.95th=[ 275], 00:33:50.552 | 99.99th=[ 338] 00:33:50.552 bw ( KiB/s): min=62976, max=347136, per=8.11%, avg=158781.10, stdev=67635.91, samples=20 00:33:50.552 iops : min= 246, max= 1356, avg=620.20, stdev=264.20, samples=20 00:33:50.552 lat (msec) : 4=0.19%, 10=2.78%, 20=1.55%, 50=15.90%, 100=30.79% 00:33:50.552 lat (msec) : 250=48.07%, 500=0.73% 00:33:50.552 cpu : usr=0.40%, sys=2.15%, ctx=1437, majf=0, minf=4097 00:33:50.552 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:33:50.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:50.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:50.552 issued rwts: total=6266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:50.552 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:50.552 00:33:50.552 Run status group 0 (all jobs): 00:33:50.552 READ: bw=1911MiB/s (2004MB/s), 142MiB/s-257MiB/s (149MB/s-269MB/s), io=18.8GiB (20.2GB), run=10031-10100msec 00:33:50.552 00:33:50.552 Disk stats (read/write): 00:33:50.552 nvme0n1: ios=13147/0, merge=0/0, ticks=1238021/0, in_queue=1238021, util=96.88% 00:33:50.552 nvme10n1: ios=14421/0, merge=0/0, ticks=1238060/0, in_queue=1238060, util=97.14% 00:33:50.552 nvme1n1: ios=11899/0, merge=0/0, ticks=1229455/0, in_queue=1229455, util=97.43% 00:33:50.552 nvme2n1: ios=16273/0, merge=0/0, ticks=1231588/0, in_queue=1231588, util=97.61% 00:33:50.552 nvme3n1: ios=20510/0, merge=0/0, ticks=1235811/0, in_queue=1235811, util=97.69% 00:33:50.552 nvme4n1: ios=12030/0, merge=0/0, ticks=1238846/0, 
in_queue=1238846, util=98.05% 00:33:50.552 nvme5n1: ios=12406/0, merge=0/0, ticks=1232310/0, in_queue=1232310, util=98.24% 00:33:50.552 nvme6n1: ios=11161/0, merge=0/0, ticks=1230283/0, in_queue=1230283, util=98.39% 00:33:50.552 nvme7n1: ios=13861/0, merge=0/0, ticks=1232127/0, in_queue=1232127, util=98.85% 00:33:50.552 nvme8n1: ios=13492/0, merge=0/0, ticks=1234343/0, in_queue=1234343, util=99.05% 00:33:50.552 nvme9n1: ios=12276/0, merge=0/0, ticks=1231759/0, in_queue=1231759, util=99.20% 00:33:50.552 08:59:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:33:50.552 [global] 00:33:50.552 thread=1 00:33:50.552 invalidate=1 00:33:50.552 rw=randwrite 00:33:50.552 time_based=1 00:33:50.552 runtime=10 00:33:50.552 ioengine=libaio 00:33:50.552 direct=1 00:33:50.552 bs=262144 00:33:50.552 iodepth=64 00:33:50.552 norandommap=1 00:33:50.552 numjobs=1 00:33:50.552 00:33:50.552 [job0] 00:33:50.552 filename=/dev/nvme0n1 00:33:50.552 [job1] 00:33:50.552 filename=/dev/nvme10n1 00:33:50.552 [job2] 00:33:50.552 filename=/dev/nvme1n1 00:33:50.552 [job3] 00:33:50.552 filename=/dev/nvme2n1 00:33:50.552 [job4] 00:33:50.552 filename=/dev/nvme3n1 00:33:50.552 [job5] 00:33:50.552 filename=/dev/nvme4n1 00:33:50.552 [job6] 00:33:50.552 filename=/dev/nvme5n1 00:33:50.552 [job7] 00:33:50.552 filename=/dev/nvme6n1 00:33:50.552 [job8] 00:33:50.552 filename=/dev/nvme7n1 00:33:50.552 [job9] 00:33:50.552 filename=/dev/nvme8n1 00:33:50.552 [job10] 00:33:50.552 filename=/dev/nvme9n1 00:33:50.552 Could not set queue depth (nvme0n1) 00:33:50.552 Could not set queue depth (nvme10n1) 00:33:50.552 Could not set queue depth (nvme1n1) 00:33:50.552 Could not set queue depth (nvme2n1) 00:33:50.552 Could not set queue depth (nvme3n1) 00:33:50.552 Could not set queue depth (nvme4n1) 00:33:50.552 Could not set queue depth (nvme5n1) 00:33:50.552 Could not set queue depth (nvme6n1) 00:33:50.552 Could not set queue depth (nvme7n1) 00:33:50.552 Could not set queue depth (nvme8n1) 00:33:50.552 Could not set queue depth (nvme9n1) 00:33:50.552 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:50.552 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:50.552 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:50.552 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:50.552 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:50.552 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:50.552 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:50.552 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:50.552 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:50.552 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:50.552 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:33:50.552 fio-3.35 00:33:50.552 Starting 11 threads 00:34:00.523 00:34:00.523 job0: (groupid=0, jobs=1): err= 0: pid=2339702: Wed May 15 08:59:54 2024 00:34:00.523 write: IOPS=610, BW=153MiB/s (160MB/s)(1555MiB/10184msec); 0 zone resets 00:34:00.523 slat (usec): min=17, max=63831, avg=1317.32, stdev=3695.03 00:34:00.523 clat (usec): min=1153, max=380321, avg=103370.66, stdev=75048.84 00:34:00.523 lat (usec): min=1692, max=380362, avg=104687.98, stdev=76021.54 00:34:00.523 clat percentiles (msec): 00:34:00.523 | 1.00th=[ 7], 5.00th=[ 19], 10.00th=[ 31], 20.00th=[ 44], 00:34:00.523 | 30.00th=[ 48], 40.00th=[ 58], 50.00th=[ 80], 60.00th=[ 93], 00:34:00.523 | 70.00th=[ 134], 80.00th=[ 176], 90.00th=[ 232], 95.00th=[ 249], 00:34:00.523 | 99.00th=[ 288], 99.50th=[ 296], 99.90th=[ 355], 99.95th=[ 368], 00:34:00.523 | 99.99th=[ 380] 00:34:00.523 bw ( KiB/s): min=63488, max=377856, per=10.94%, avg=157644.80, stdev=103943.35, samples=20 00:34:00.523 iops : min= 248, max= 1476, avg=615.80, stdev=406.03, samples=20 00:34:00.523 lat (msec) : 2=0.06%, 4=0.34%, 10=2.06%, 20=3.01%, 50=31.30% 00:34:00.523 lat (msec) : 100=25.62%, 250=33.05%, 500=4.57% 00:34:00.523 cpu : usr=2.04%, sys=1.69%, ctx=2861, majf=0, minf=1 00:34:00.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:34:00.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:34:00.523 issued rwts: total=0,6221,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.523 latency : target=0, window=0, percentile=100.00%, depth=64 00:34:00.523 job1: (groupid=0, jobs=1): err= 0: pid=2339704: Wed May 15 08:59:54 2024 00:34:00.523 write: IOPS=512, BW=128MiB/s (134MB/s)(1300MiB/10139msec); 0 zone resets 00:34:00.523 slat (usec): min=16, max=53883, avg=1302.95, stdev=3872.94 00:34:00.523 clat (msec): min=2, max=366, avg=123.44, stdev=77.24 00:34:00.523 lat (msec): min=2, max=377, avg=124.75, stdev=78.25 00:34:00.523 clat percentiles (msec): 00:34:00.523 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 24], 20.00th=[ 48], 00:34:00.523 | 30.00th=[ 82], 40.00th=[ 101], 50.00th=[ 113], 60.00th=[ 131], 00:34:00.523 | 70.00th=[ 155], 80.00th=[ 192], 90.00th=[ 236], 95.00th=[ 259], 00:34:00.523 | 99.00th=[ 330], 99.50th=[ 342], 99.90th=[ 351], 99.95th=[ 359], 00:34:00.523 | 99.99th=[ 368] 00:34:00.523 bw ( KiB/s): min=51712, max=316416, per=9.12%, avg=131481.60, stdev=58286.53, samples=20 00:34:00.523 iops : min= 202, max= 1236, avg=513.60, stdev=227.68, samples=20 00:34:00.523 lat (msec) : 4=0.29%, 10=2.48%, 20=5.46%, 50=12.04%, 100=19.83% 00:34:00.523 lat (msec) : 250=53.88%, 500=6.02% 00:34:00.523 cpu : usr=1.63%, sys=1.67%, ctx=3199, majf=0, minf=1 00:34:00.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:34:00.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:34:00.523 issued rwts: total=0,5199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.523 latency : target=0, window=0, percentile=100.00%, depth=64 00:34:00.523 job2: (groupid=0, jobs=1): err= 0: pid=2339705: Wed May 15 08:59:54 2024 00:34:00.523 write: IOPS=452, BW=113MiB/s (118MB/s)(1151MiB/10185msec); 0 zone resets 00:34:00.523 slat (usec): min=22, max=125602, avg=1499.42, stdev=5145.75 00:34:00.523 clat (msec): min=2, max=396, avg=139.99, stdev=88.60 00:34:00.523 lat (msec): min=2, max=396, avg=141.49, stdev=89.68 00:34:00.523 clat 
percentiles (msec): 00:34:00.523 | 1.00th=[ 13], 5.00th=[ 31], 10.00th=[ 40], 20.00th=[ 50], 00:34:00.523 | 30.00th=[ 73], 40.00th=[ 95], 50.00th=[ 124], 60.00th=[ 153], 00:34:00.523 | 70.00th=[ 194], 80.00th=[ 226], 90.00th=[ 266], 95.00th=[ 292], 00:34:00.523 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 384], 99.95th=[ 384], 00:34:00.523 | 99.99th=[ 397] 00:34:00.523 bw ( KiB/s): min=46080, max=304640, per=8.07%, avg=116249.60, stdev=64815.08, samples=20 00:34:00.523 iops : min= 180, max= 1190, avg=454.10, stdev=253.18, samples=20 00:34:00.523 lat (msec) : 4=0.07%, 10=0.59%, 20=1.69%, 50=17.79%, 100=21.39% 00:34:00.523 lat (msec) : 250=43.92%, 500=14.55% 00:34:00.523 cpu : usr=1.09%, sys=1.57%, ctx=2444, majf=0, minf=1 00:34:00.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:34:00.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:34:00.523 issued rwts: total=0,4604,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.523 latency : target=0, window=0, percentile=100.00%, depth=64 00:34:00.523 job3: (groupid=0, jobs=1): err= 0: pid=2339718: Wed May 15 08:59:54 2024 00:34:00.523 write: IOPS=408, BW=102MiB/s (107MB/s)(1040MiB/10182msec); 0 zone resets 00:34:00.523 slat (usec): min=15, max=68584, avg=2163.51, stdev=5024.47 00:34:00.523 clat (msec): min=5, max=374, avg=154.46, stdev=75.66 00:34:00.523 lat (msec): min=5, max=374, avg=156.63, stdev=76.58 00:34:00.523 clat percentiles (msec): 00:34:00.523 | 1.00th=[ 39], 5.00th=[ 78], 10.00th=[ 80], 20.00th=[ 83], 00:34:00.523 | 30.00th=[ 87], 40.00th=[ 93], 50.00th=[ 146], 60.00th=[ 186], 00:34:00.523 | 70.00th=[ 209], 80.00th=[ 224], 90.00th=[ 264], 95.00th=[ 292], 00:34:00.523 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 363], 99.95th=[ 363], 00:34:00.523 | 99.99th=[ 376] 00:34:00.523 bw ( KiB/s): min=57344, max=196608, per=7.27%, avg=104832.00, stdev=48184.41, samples=20 00:34:00.523 iops : min= 224, max= 768, avg=409.50, stdev=188.22, samples=20 00:34:00.523 lat (msec) : 10=0.07%, 20=0.31%, 50=1.35%, 100=42.45%, 250=44.40% 00:34:00.523 lat (msec) : 500=11.42% 00:34:00.523 cpu : usr=1.35%, sys=1.34%, ctx=1530, majf=0, minf=1 00:34:00.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:34:00.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:34:00.523 issued rwts: total=0,4158,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.523 latency : target=0, window=0, percentile=100.00%, depth=64 00:34:00.523 job4: (groupid=0, jobs=1): err= 0: pid=2339719: Wed May 15 08:59:54 2024 00:34:00.523 write: IOPS=559, BW=140MiB/s (147MB/s)(1417MiB/10139msec); 0 zone resets 00:34:00.523 slat (usec): min=14, max=59822, avg=1371.07, stdev=3908.51 00:34:00.523 clat (usec): min=881, max=455845, avg=113052.58, stdev=78467.47 00:34:00.523 lat (usec): min=972, max=455886, avg=114423.66, stdev=79560.99 00:34:00.523 clat percentiles (msec): 00:34:00.523 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 26], 20.00th=[ 51], 00:34:00.523 | 30.00th=[ 62], 40.00th=[ 89], 50.00th=[ 104], 60.00th=[ 117], 00:34:00.523 | 70.00th=[ 138], 80.00th=[ 157], 90.00th=[ 205], 95.00th=[ 255], 00:34:00.523 | 99.00th=[ 380], 99.50th=[ 422], 99.90th=[ 456], 99.95th=[ 456], 00:34:00.523 | 99.99th=[ 456] 00:34:00.523 bw ( KiB/s): min=38912, max=338944, per=9.96%, avg=143488.00, stdev=66098.40, samples=20 00:34:00.523 iops : min= 152, 
max= 1324, avg=560.50, stdev=258.20, samples=20 00:34:00.523 lat (usec) : 1000=0.05% 00:34:00.523 lat (msec) : 2=0.11%, 4=0.55%, 10=2.96%, 20=4.11%, 50=12.23% 00:34:00.523 lat (msec) : 100=26.57%, 250=47.28%, 500=6.14% 00:34:00.523 cpu : usr=1.64%, sys=1.81%, ctx=2863, majf=0, minf=1 00:34:00.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:34:00.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:34:00.523 issued rwts: total=0,5668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.523 latency : target=0, window=0, percentile=100.00%, depth=64 00:34:00.523 job5: (groupid=0, jobs=1): err= 0: pid=2339720: Wed May 15 08:59:54 2024 00:34:00.523 write: IOPS=641, BW=160MiB/s (168MB/s)(1611MiB/10038msec); 0 zone resets 00:34:00.523 slat (usec): min=15, max=143980, avg=1133.50, stdev=4042.51 00:34:00.523 clat (usec): min=860, max=329858, avg=98361.69, stdev=78338.89 00:34:00.523 lat (usec): min=921, max=333750, avg=99495.19, stdev=79115.95 00:34:00.523 clat percentiles (msec): 00:34:00.523 | 1.00th=[ 7], 5.00th=[ 19], 10.00th=[ 35], 20.00th=[ 42], 00:34:00.523 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 56], 60.00th=[ 82], 00:34:00.523 | 70.00th=[ 136], 80.00th=[ 178], 90.00th=[ 228], 95.00th=[ 253], 00:34:00.523 | 99.00th=[ 296], 99.50th=[ 309], 99.90th=[ 326], 99.95th=[ 326], 00:34:00.523 | 99.99th=[ 330] 00:34:00.523 bw ( KiB/s): min=71680, max=379904, per=11.33%, avg=163302.40, stdev=98933.07, samples=20 00:34:00.523 iops : min= 280, max= 1484, avg=637.90, stdev=386.46, samples=20 00:34:00.523 lat (usec) : 1000=0.03% 00:34:00.523 lat (msec) : 2=0.03%, 4=0.29%, 10=1.68%, 20=3.66%, 50=41.90% 00:34:00.523 lat (msec) : 100=17.26%, 250=30.07%, 500=5.08% 00:34:00.523 cpu : usr=1.92%, sys=1.83%, ctx=2979, majf=0, minf=1 00:34:00.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:34:00.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:34:00.523 issued rwts: total=0,6442,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.523 latency : target=0, window=0, percentile=100.00%, depth=64 00:34:00.524 job6: (groupid=0, jobs=1): err= 0: pid=2339721: Wed May 15 08:59:54 2024 00:34:00.524 write: IOPS=344, BW=86.0MiB/s (90.2MB/s)(876MiB/10179msec); 0 zone resets 00:34:00.524 slat (usec): min=21, max=81525, avg=2611.47, stdev=5576.31 00:34:00.524 clat (usec): min=1427, max=378102, avg=183251.80, stdev=65421.92 00:34:00.524 lat (usec): min=1462, max=378144, avg=185863.27, stdev=66319.57 00:34:00.524 clat percentiles (msec): 00:34:00.524 | 1.00th=[ 13], 5.00th=[ 41], 10.00th=[ 82], 20.00th=[ 136], 00:34:00.524 | 30.00th=[ 174], 40.00th=[ 190], 50.00th=[ 199], 60.00th=[ 205], 00:34:00.524 | 70.00th=[ 213], 80.00th=[ 226], 90.00th=[ 255], 95.00th=[ 271], 00:34:00.524 | 99.00th=[ 321], 99.50th=[ 330], 99.90th=[ 363], 99.95th=[ 380], 00:34:00.524 | 99.99th=[ 380] 00:34:00.524 bw ( KiB/s): min=55296, max=167424, per=6.11%, avg=88056.80, stdev=25798.13, samples=20 00:34:00.524 iops : min= 216, max= 654, avg=343.95, stdev=100.79, samples=20 00:34:00.524 lat (msec) : 2=0.03%, 4=0.20%, 10=0.31%, 20=2.28%, 50=3.74% 00:34:00.524 lat (msec) : 100=4.54%, 250=76.45%, 500=12.45% 00:34:00.524 cpu : usr=1.14%, sys=1.12%, ctx=1376, majf=0, minf=1 00:34:00.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:34:00.524 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:34:00.524 issued rwts: total=0,3503,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.524 latency : target=0, window=0, percentile=100.00%, depth=64 00:34:00.524 job7: (groupid=0, jobs=1): err= 0: pid=2339722: Wed May 15 08:59:54 2024 00:34:00.524 write: IOPS=445, BW=111MiB/s (117MB/s)(1133MiB/10181msec); 0 zone resets 00:34:00.524 slat (usec): min=23, max=100416, avg=1692.70, stdev=4500.63 00:34:00.524 clat (msec): min=2, max=422, avg=141.96, stdev=73.20 00:34:00.524 lat (msec): min=2, max=422, avg=143.66, stdev=74.21 00:34:00.524 clat percentiles (msec): 00:34:00.524 | 1.00th=[ 10], 5.00th=[ 21], 10.00th=[ 37], 20.00th=[ 100], 00:34:00.524 | 30.00th=[ 110], 40.00th=[ 122], 50.00th=[ 138], 60.00th=[ 150], 00:34:00.524 | 70.00th=[ 167], 80.00th=[ 201], 90.00th=[ 232], 95.00th=[ 279], 00:34:00.524 | 99.00th=[ 359], 99.50th=[ 376], 99.90th=[ 414], 99.95th=[ 414], 00:34:00.524 | 99.99th=[ 422] 00:34:00.524 bw ( KiB/s): min=56320, max=176640, per=7.94%, avg=114380.80, stdev=32575.18, samples=20 00:34:00.524 iops : min= 220, max= 690, avg=446.80, stdev=127.25, samples=20 00:34:00.524 lat (msec) : 4=0.09%, 10=1.04%, 20=3.80%, 50=7.95%, 100=7.81% 00:34:00.524 lat (msec) : 250=72.88%, 500=6.44% 00:34:00.524 cpu : usr=1.45%, sys=1.46%, ctx=2319, majf=0, minf=1 00:34:00.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:34:00.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:34:00.524 issued rwts: total=0,4531,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.524 latency : target=0, window=0, percentile=100.00%, depth=64 00:34:00.524 job8: (groupid=0, jobs=1): err= 0: pid=2339723: Wed May 15 08:59:54 2024 00:34:00.524 write: IOPS=615, BW=154MiB/s (161MB/s)(1544MiB/10039msec); 0 zone resets 00:34:00.524 slat (usec): min=17, max=83639, avg=1356.93, stdev=3692.97 00:34:00.524 clat (usec): min=1255, max=371064, avg=102666.02, stdev=71070.34 00:34:00.524 lat (usec): min=1897, max=371118, avg=104022.96, stdev=72049.17 00:34:00.524 clat percentiles (msec): 00:34:00.524 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 27], 20.00th=[ 41], 00:34:00.524 | 30.00th=[ 55], 40.00th=[ 75], 50.00th=[ 80], 60.00th=[ 109], 00:34:00.524 | 70.00th=[ 128], 80.00th=[ 155], 90.00th=[ 194], 95.00th=[ 247], 00:34:00.524 | 99.00th=[ 342], 99.50th=[ 359], 99.90th=[ 368], 99.95th=[ 372], 00:34:00.524 | 99.99th=[ 372] 00:34:00.524 bw ( KiB/s): min=47104, max=386560, per=10.85%, avg=156417.45, stdev=90614.77, samples=20 00:34:00.524 iops : min= 184, max= 1510, avg=611.00, stdev=353.96, samples=20 00:34:00.524 lat (msec) : 2=0.03%, 4=0.45%, 10=2.40%, 20=4.78%, 50=21.27% 00:34:00.524 lat (msec) : 100=25.61%, 250=40.95%, 500=4.52% 00:34:00.524 cpu : usr=1.63%, sys=1.86%, ctx=2787, majf=0, minf=1 00:34:00.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:34:00.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:34:00.524 issued rwts: total=0,6174,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.524 latency : target=0, window=0, percentile=100.00%, depth=64 00:34:00.524 job9: (groupid=0, jobs=1): err= 0: pid=2339724: Wed May 15 08:59:54 2024 00:34:00.524 write: IOPS=442, BW=111MiB/s (116MB/s)(1128MiB/10183msec); 0 zone resets 00:34:00.524 
slat (usec): min=14, max=65764, avg=1670.45, stdev=4352.71 00:34:00.524 clat (usec): min=1646, max=403517, avg=142742.53, stdev=72491.82 00:34:00.524 lat (usec): min=1807, max=403552, avg=144412.99, stdev=73201.59 00:34:00.524 clat percentiles (msec): 00:34:00.524 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 60], 20.00th=[ 82], 00:34:00.524 | 30.00th=[ 100], 40.00th=[ 109], 50.00th=[ 126], 60.00th=[ 161], 00:34:00.524 | 70.00th=[ 188], 80.00th=[ 213], 90.00th=[ 236], 95.00th=[ 259], 00:34:00.524 | 99.00th=[ 317], 99.50th=[ 347], 99.90th=[ 393], 99.95th=[ 393], 00:34:00.524 | 99.99th=[ 405] 00:34:00.524 bw ( KiB/s): min=58880, max=220160, per=7.90%, avg=113843.20, stdev=48298.35, samples=20 00:34:00.524 iops : min= 230, max= 860, avg=444.70, stdev=188.67, samples=20 00:34:00.524 lat (msec) : 2=0.07%, 4=0.44%, 10=0.98%, 20=2.08%, 50=5.08% 00:34:00.524 lat (msec) : 100=22.20%, 250=62.62%, 500=6.54% 00:34:00.524 cpu : usr=1.43%, sys=1.53%, ctx=2122, majf=0, minf=1 00:34:00.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:34:00.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:34:00.524 issued rwts: total=0,4510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.524 latency : target=0, window=0, percentile=100.00%, depth=64 00:34:00.524 job10: (groupid=0, jobs=1): err= 0: pid=2339725: Wed May 15 08:59:54 2024 00:34:00.524 write: IOPS=624, BW=156MiB/s (164MB/s)(1582MiB/10135msec); 0 zone resets 00:34:00.524 slat (usec): min=22, max=105539, avg=1251.67, stdev=3824.76 00:34:00.524 clat (usec): min=1111, max=344445, avg=101159.52, stdev=73730.73 00:34:00.524 lat (usec): min=1198, max=344491, avg=102411.19, stdev=74668.67 00:34:00.524 clat percentiles (msec): 00:34:00.524 | 1.00th=[ 4], 5.00th=[ 19], 10.00th=[ 28], 20.00th=[ 44], 00:34:00.524 | 30.00th=[ 61], 40.00th=[ 75], 50.00th=[ 79], 60.00th=[ 83], 00:34:00.524 | 70.00th=[ 95], 80.00th=[ 165], 90.00th=[ 228], 95.00th=[ 264], 00:34:00.524 | 99.00th=[ 300], 99.50th=[ 309], 99.90th=[ 330], 99.95th=[ 334], 00:34:00.524 | 99.99th=[ 347] 00:34:00.524 bw ( KiB/s): min=61440, max=335360, per=11.13%, avg=160358.40, stdev=87928.47, samples=20 00:34:00.524 iops : min= 240, max= 1310, avg=626.40, stdev=343.47, samples=20 00:34:00.524 lat (msec) : 2=0.40%, 4=0.96%, 10=1.30%, 20=3.32%, 50=17.95% 00:34:00.524 lat (msec) : 100=46.89%, 250=22.30%, 500=6.88% 00:34:00.524 cpu : usr=1.84%, sys=1.90%, ctx=3046, majf=0, minf=1 00:34:00.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:34:00.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:34:00.524 issued rwts: total=0,6327,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.524 latency : target=0, window=0, percentile=100.00%, depth=64 00:34:00.524 00:34:00.524 Run status group 0 (all jobs): 00:34:00.524 WRITE: bw=1407MiB/s (1476MB/s), 86.0MiB/s-160MiB/s (90.2MB/s-168MB/s), io=14.0GiB (15.0GB), run=10038-10185msec 00:34:00.524 00:34:00.524 Disk stats (read/write): 00:34:00.524 nvme0n1: ios=51/12433, merge=0/0, ticks=1186/1240222, in_queue=1241408, util=99.48% 00:34:00.524 nvme10n1: ios=49/10204, merge=0/0, ticks=45/1218462, in_queue=1218507, util=97.52% 00:34:00.524 nvme1n1: ios=49/9193, merge=0/0, ticks=41/1246546, in_queue=1246587, util=97.77% 00:34:00.524 nvme2n1: ios=41/8306, merge=0/0, ticks=29/1237073, in_queue=1237102, util=97.85% 
00:34:00.524 nvme3n1: ios=27/11142, merge=0/0, ticks=181/1212887, in_queue=1213068, util=98.22% 00:34:00.524 nvme4n1: ios=44/12532, merge=0/0, ticks=3879/1210637, in_queue=1214516, util=100.00% 00:34:00.524 nvme5n1: ios=44/6989, merge=0/0, ticks=2679/1234769, in_queue=1237448, util=100.00% 00:34:00.524 nvme6n1: ios=39/9053, merge=0/0, ticks=1216/1242356, in_queue=1243572, util=100.00% 00:34:00.524 nvme7n1: ios=44/11964, merge=0/0, ticks=1592/1217434, in_queue=1219026, util=100.00% 00:34:00.524 nvme8n1: ios=0/9009, merge=0/0, ticks=0/1244605, in_queue=1244605, util=99.00% 00:34:00.524 nvme9n1: ios=40/12466, merge=0/0, ticks=617/1212699, in_queue=1213316, util=100.00% 00:34:00.524 08:59:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:34:00.524 08:59:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:34:00.524 08:59:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:34:00.524 08:59:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:00.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:00.524 08:59:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:34:00.524 08:59:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:34:00.524 08:59:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:34:00.524 08:59:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK1 00:34:00.524 08:59:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:34:00.524 08:59:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:34:00.524 08:59:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:34:00.524 08:59:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:00.524 08:59:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.524 08:59:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:34:00.524 08:59:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.524 08:59:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:34:00.524 08:59:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:34:00.524 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:34:00.524 08:59:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:34:00.524 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:34:00.524 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:34:00.524 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK2 00:34:00.524 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:34:00.525 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:34:00.525 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:34:00.525 08:59:55 nvmf_tcp.nvmf_multiconnection 
-- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:00.525 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.525 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:34:00.525 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.525 08:59:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:34:00.525 08:59:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:34:00.783 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:34:00.783 08:59:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:34:00.783 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:34:00.783 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:34:00.783 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK3 00:34:00.783 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:34:00.783 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:34:00.783 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:34:00.783 08:59:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:34:00.783 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.783 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:34:00.783 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.783 08:59:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:34:00.783 08:59:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:34:01.041 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:34:01.041 08:59:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:34:01.041 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:34:01.041 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:34:01.041 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK4 00:34:01.041 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:34:01.041 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:34:01.041 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:34:01.041 08:59:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:34:01.041 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.041 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:34:01.041 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.041 08:59:55 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:34:01.041 08:59:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:34:01.299 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:34:01.299 08:59:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:34:01.299 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:34:01.299 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:34:01.299 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK5 00:34:01.299 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:34:01.299 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:34:01.299 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:34:01.299 08:59:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:34:01.299 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.299 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:34:01.299 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.299 08:59:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:34:01.299 08:59:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:34:01.299 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:34:01.299 08:59:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:34:01.299 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:34:01.299 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:34:01.299 08:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK6 00:34:01.299 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:34:01.299 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:34:01.299 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:34:01.299 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:34:01.299 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.299 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:34:01.299 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.299 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:34:01.299 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:34:01.557 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:34:01.557 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:34:01.557 08:59:56 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1216 -- # local i=0 00:34:01.557 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:34:01.557 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK7 00:34:01.557 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:34:01.557 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:34:01.557 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:34:01.557 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:34:01.557 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.557 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:34:01.557 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.557 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:34:01.557 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:34:01.815 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK8 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:34:01.815 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK9 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- 
# grep -q -w SPDK9 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:34:01.815 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:34:02.073 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:34:02.073 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:34:02.073 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:34:02.073 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:34:02.073 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK10 00:34:02.073 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:34:02.073 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:34:02.073 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:34:02.073 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:34:02.073 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.073 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:34:02.074 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK11 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # 
set +x 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:02.074 08:59:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:02.074 rmmod nvme_tcp 00:34:02.074 rmmod nvme_fabrics 00:34:02.074 rmmod nvme_keyring 00:34:02.332 08:59:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:02.332 08:59:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:34:02.332 08:59:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:34:02.332 08:59:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 2334417 ']' 00:34:02.332 08:59:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 2334417 00:34:02.332 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@947 -- # '[' -z 2334417 ']' 00:34:02.332 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # kill -0 2334417 00:34:02.332 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # uname 00:34:02.332 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:34:02.332 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2334417 00:34:02.332 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:34:02.332 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:34:02.332 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2334417' 00:34:02.332 killing process with pid 2334417 00:34:02.332 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # kill 2334417 00:34:02.332 [2024-05-15 08:59:56.905703] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:34:02.332 08:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@971 -- # wait 2334417 00:34:02.898 08:59:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:02.898 08:59:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:02.898 08:59:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:02.898 08:59:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:02.898 08:59:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:02.898 08:59:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:34:02.898 08:59:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:02.898 08:59:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.797 08:59:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:04.797 00:34:04.797 real 1m0.664s 00:34:04.797 user 3m23.095s 00:34:04.797 sys 0m24.226s 00:34:04.797 08:59:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # xtrace_disable 00:34:04.797 08:59:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:34:04.797 ************************************ 00:34:04.797 END TEST nvmf_multiconnection 00:34:04.797 ************************************ 00:34:04.797 08:59:59 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:34:04.797 08:59:59 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:34:04.797 08:59:59 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:34:04.797 08:59:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:04.797 ************************************ 00:34:04.797 START TEST nvmf_initiator_timeout 00:34:04.797 ************************************ 00:34:04.797 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:34:05.055 * Looking for test storage... 00:34:05.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.055 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:34:05.056 08:59:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:34:07.576 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:34:07.577 09:00:01 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:07.577 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:07.577 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.577 
09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.577 09:00:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:07.577 Found net devices under 0000:09:00.0: cvl_0_0 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:07.577 Found net devices under 0000:09:00.1: cvl_0_1 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:07.577 
09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:34:07.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:07.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms
00:34:07.577
00:34:07.577 --- 10.0.0.2 ping statistics ---
00:34:07.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:07.577 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:07.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:07.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms
00:34:07.577
00:34:07.577 --- 10.0.0.1 ping statistics ---
00:34:07.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:07.577 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp
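The sequence above builds the loopback topology this test runs on: the target port (cvl_0_0) is moved into a private network namespace and given 10.0.0.2, the initiator port (cvl_0_1) stays in the root namespace with 10.0.0.1, port 4420 is opened in the firewall, and a ping in each direction confirms reachability before the target application is started inside the namespace below. A condensed sketch of the same sequence; the interface names, addresses, and namespace name are the per-run values from this log, not fixed constants:

  # Condensed from the xtrace records above; cvl_0_0/cvl_0_1 and 10.0.0.x are
  # values specific to this run.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

The point of splitting the two physical ports across namespaces is that the NVMe/TCP traffic under test crosses the real NIC and wire rather than the kernel loopback path.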
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@721 -- # xtrace_disable
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=2343531
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 2343531
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@828 -- # '[' -z 2343531 ']'
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local max_retries=100
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:07.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # xtrace_disable
00:34:07.577 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:34:07.577 [2024-05-15 09:00:02.208863] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:34:07.577 [2024-05-15 09:00:02.208962] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:07.577 EAL: No free 2048 kB hugepages reported on node 1
00:34:07.577 [2024-05-15 09:00:02.290531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:07.836 [2024-05-15 09:00:02.378275] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:07.836 [2024-05-15 09:00:02.378324] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:07.836 [2024-05-15 09:00:02.378348] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:07.836 [2024-05-15 09:00:02.378359] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:07.836 [2024-05-15 09:00:02.378370] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:07.836 [2024-05-15 09:00:02.378507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:34:07.836 [2024-05-15 09:00:02.378572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:34:07.836 [2024-05-15 09:00:02.378640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:34:07.836 [2024-05-15 09:00:02.378642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@861 -- # return 0
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@727 -- # xtrace_disable
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:34:07.836 Malloc0
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:34:07.836 Delay0
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:34:07.836 [2024-05-15 09:00:02.581433] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:34:07.836 [2024-05-15 09:00:02.609426] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:34:07.836 [2024-05-15 09:00:02.609770] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:34:07.836 09:00:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:34:08.805 09:00:03 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME
00:34:08.805 09:00:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local i=0
00:34:08.805 09:00:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0
00:34:08.805 09:00:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1197 -- # [[ -n '' ]]
00:34:08.805 09:00:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # sleep 2
00:34:10.700 09:00:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # (( i++ <= 15 ))
00:34:10.700 09:00:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL
00:34:10.700 09:00:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME
00:34:10.700 09:00:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # nvme_devices=1
00:34:10.700 09:00:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter ))
00:34:10.700 09:00:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # return 0
00:34:10.700 09:00:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2343902
00:34:10.700 09:00:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v
00:34:10.700 09:00:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3
00:34:10.700 [global]
00:34:10.700 thread=1
00:34:10.700 invalidate=1
00:34:10.700 rw=write
00:34:10.700 time_based=1
00:34:10.700 runtime=60
00:34:10.700 ioengine=libaio
00:34:10.700 direct=1
00:34:10.700 bs=4096
00:34:10.700 iodepth=1
00:34:10.700 norandommap=0
00:34:10.700 numjobs=1
00:34:10.700
00:34:10.700 verify_dump=1
00:34:10.700 verify_backlog=512
00:34:10.700 verify_state_save=0
00:34:10.700 do_verify=1
00:34:10.700 verify=crc32c-intel
00:34:10.700 [job0]
00:34:10.700 filename=/dev/nvme0n1
00:34:10.700 Could not set queue depth (nvme0n1)
00:34:10.700 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:10.700 fio-3.35
00:34:10.700 Starting 1 thread
00:34:13.975 09:00:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000
00:34:13.975 09:00:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable
00:34:13.975 09:00:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:34:13.975 true
00:34:13.975 09:00:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:34:13.975 09:00:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
00:34:13.975 09:00:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable
00:34:13.975 09:00:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:34:13.975 true
00:34:13.975 09:00:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:34:13.975 09:00:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000
00:34:13.976 09:00:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable
00:34:13.976 09:00:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:34:13.976 true
00:34:13.976 09:00:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:34:13.976 09:00:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
00:34:13.976 09:00:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable
00:34:13.976 09:00:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:34:13.976 true
00:34:13.976 09:00:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:34:13.976 09:00:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3
00:34:16.513 09:00:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30
00:34:16.771 09:00:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable
00:34:16.771 09:00:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:34:16.771 true
00:34:16.771 09:00:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:34:16.771 09:00:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30
00:34:16.771 09:00:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable
00:34:16.771 09:00:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:34:16.771 true
00:34:16.771 09:00:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:34:16.771 09:00:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30
00:34:16.771 09:00:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable
00:34:16.771 09:00:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:34:16.771 true
00:34:16.771 09:00:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:34:16.771 09:00:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30
00:34:16.771 09:00:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable
00:34:16.771 09:00:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:34:16.771 true
00:34:16.771 09:00:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
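This is the heart of the initiator-timeout exercise: while the 60-second verifying fio job runs, the Delay0 latencies are raised from the 30 us baseline into the 31000000 us (31 s) range so that I/O queued against the delay bdev stalls long enough to exercise the initiator's command-timeout handling, and after a short soak they are dropped back to 30 us so the stalled I/O can drain and fio can finish. A minimal sketch of the same dance as direct rpc.py calls, mirroring the values logged above (microseconds, including the asymmetric p99_write value as issued):

  # Stall I/O by inflating the delay bdev's injected latency, then restore the
  # 30 us baseline so outstanding commands complete.
  for lat in avg_read avg_write p99_read; do
      scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 31000000
  done
  scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
  sleep 3
  for lat in avg_read avg_write p99_read p99_write; do
      scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 30
  done

Only the latency is changed, not the target itself, so the same connection can recover once the values are restored; the later "nvmf hotplug test: fio successful as expected" check verifies exactly that.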
00:34:16.771 09:00:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0
00:34:16.771 09:00:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2343902
00:35:12.974
00:35:12.974 job0: (groupid=0, jobs=1): err= 0: pid=2343971: Wed May 15 09:01:05 2024
00:35:12.974 read: IOPS=126, BW=505KiB/s (517kB/s)(29.6MiB/60013msec)
00:35:12.974 slat (usec): min=5, max=13885, avg=13.28, stdev=159.47
00:35:12.974 clat (usec): min=269, max=41051k, avg=7642.86, stdev=471501.96
00:35:12.974 lat (usec): min=276, max=41051k, avg=7656.14, stdev=471502.30
00:35:12.974 clat percentiles (usec):
00:35:12.974 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 293],
00:35:12.974 | 20.00th=[ 302], 30.00th=[ 310], 40.00th=[ 318],
00:35:12.974 | 50.00th=[ 330], 60.00th=[ 334], 70.00th=[ 343],
00:35:12.974 | 80.00th=[ 355], 90.00th=[ 392], 95.00th=[ 562],
00:35:12.974 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206],
00:35:12.974 | 99.95th=[ 42206], 99.99th=[17112761]
00:35:12.974 write: IOPS=127, BW=512KiB/s (524kB/s)(30.0MiB/60013msec); 0 zone resets
00:35:12.974 slat (usec): min=7, max=29107, avg=17.94, stdev=332.07
00:35:12.974 clat (usec): min=183, max=3734, avg=230.65, stdev=52.03
00:35:12.974 lat (usec): min=191, max=29463, avg=248.59, stdev=337.96
00:35:12.974 clat percentiles (usec):
00:35:12.974 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208],
00:35:12.975 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 229],
00:35:12.975 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 269], 95.00th=[ 289],
00:35:12.975 | 99.00th=[ 351], 99.50th=[ 383], 99.90th=[ 441], 99.95th=[ 840],
00:35:12.975 | 99.99th=[ 3720]
00:35:12.975 bw ( KiB/s): min= 384, max= 8192, per=100.00%, avg=5585.45, stdev=2712.08, samples=11
00:35:12.975 iops : min= 96, max= 2048, avg=1396.36, stdev=678.02, samples=11
00:35:12.975 lat (usec) : 250=42.48%, 500=54.67%, 750=0.50%, 1000=0.05%
00:35:12.975 lat (msec) : 2=0.01%, 4=0.01%, 50=2.27%, >=2000=0.01%
00:35:12.975 cpu : usr=0.25%, sys=0.43%, ctx=15267, majf=0, minf=2
00:35:12.975 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:12.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:12.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:12.975 issued rwts: total=7582,7680,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:12.975 latency : target=0, window=0, percentile=100.00%, depth=1
00:35:12.975
00:35:12.975 Run status group 0 (all jobs):
00:35:12.975 READ: bw=505KiB/s (517kB/s), 505KiB/s-505KiB/s (517kB/s-517kB/s), io=29.6MiB (31.1MB), run=60013-60013msec
00:35:12.975 WRITE: bw=512KiB/s (524kB/s), 512KiB/s-512KiB/s (524kB/s-524kB/s), io=30.0MiB (31.5MB), run=60013-60013msec
00:35:12.975
00:35:12.975 Disk stats (read/write):
00:35:12.975 nvme0n1: ios=7631/7680, merge=0/0, ticks=17952/1702, in_queue=19654, util=99.83%
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:35:12.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # local i=0
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1228 -- # return 0
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']'
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected'
00:35:12.975 nvmf hotplug test: fio successful as expected
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20}
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:35:12.975 rmmod nvme_tcp
00:35:12.975 rmmod nvme_fabrics
00:35:12.975 rmmod nvme_keyring
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124
-- # set -e 00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 2343531 ']' 00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 2343531 00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@947 -- # '[' -z 2343531 ']' 00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # kill -0 2343531 00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # uname 00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2343531 00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2343531' 00:35:12.975 killing process with pid 2343531 00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # kill 2343531 00:35:12.975 [2024-05-15 09:01:05.774161] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:35:12.975 09:01:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@971 -- # wait 2343531 00:35:12.975 09:01:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:12.975 09:01:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:12.975 09:01:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:12.975 09:01:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:12.975 09:01:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:12.975 09:01:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:12.975 09:01:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:12.975 09:01:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.543 09:01:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:13.543 00:35:13.543 real 1m8.484s 00:35:13.543 user 4m10.740s 00:35:13.543 sys 0m6.937s 00:35:13.543 09:01:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # xtrace_disable 00:35:13.543 09:01:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:35:13.543 ************************************ 00:35:13.543 END TEST nvmf_initiator_timeout 00:35:13.543 ************************************ 00:35:13.543 09:01:08 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:35:13.543 09:01:08 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:35:13.543 09:01:08 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:35:13.543 09:01:08 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:35:13.543 09:01:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:16.113 09:01:10 nvmf_tcp -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:16.113 09:01:10 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:35:16.113 09:01:10 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:16.113 09:01:10 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:16.113 09:01:10 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:16.113 09:01:10 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:16.113 09:01:10 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:16.113 09:01:10 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:35:16.113 09:01:10 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:16.113 09:01:10 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:35:16.113 09:01:10 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:35:16.113 09:01:10 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:35:16.113 09:01:10 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:35:16.113 09:01:10 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:35:16.113 09:01:10 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:35:16.113 09:01:10 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:16.113 09:01:10 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:16.113 09:01:10 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:35:16.114 Found 0000:09:00.0 (0x8086 - 0x159b) 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:35:16.114 Found 0000:09:00.1 (0x8086 - 0x159b) 
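The discovery pass above (gather_supported_nvmf_pci_devs) matches the host's E810 ports by PCI vendor/device ID (0x8086/0x159b in this run) from the pci_bus_cache, and the records that follow resolve each matched PCI function to its kernel net device through sysfs. A standalone sketch of that sysfs mapping, as a hypothetical helper rather than the harness's actual implementation:

  # The kernel publishes a PCI NIC's netdev name under
  # /sys/bus/pci/devices/<domain:bus:dev.fn>/net/.
  for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] || continue          # skip ports with no bound netdev
          echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done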
00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:35:16.114 Found net devices under 0000:09:00.0: cvl_0_0 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:35:16.114 Found net devices under 0000:09:00.1: cvl_0_1 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:35:16.114 09:01:10 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:35:16.114 09:01:10 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:35:16.114 09:01:10 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:35:16.114 09:01:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:16.114 ************************************ 00:35:16.114 START TEST nvmf_perf_adq 00:35:16.114 ************************************ 00:35:16.114 09:01:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:35:16.114 * Looking for test storage... 
00:35:16.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:16.114 09:01:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:16.114 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:35:16.114 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:16.114 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:16.114 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:16.114 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:16.114 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:16.114 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:35:16.115 09:01:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:35:18.647 Found 0000:09:00.0 (0x8086 - 0x159b) 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:35:18.647 Found 0000:09:00.1 (0x8086 - 0x159b) 
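The "Found net devices under ..." lines that follow resolve each PCI function to its kernel interface through sysfs: a network PCI device exposes its bound interfaces under /sys/bus/pci/devices/<addr>/net/, and the already-expanded "[[ up == up ]]" checks in the trace suggest an operstate test on each candidate. A sketch of that lookup (PCI address taken from the log; the cvl_0_* names are this node's renamed ice ports):

    pci=0000:09:00.0    # address from the trace; adjust per node
    for net in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $net ]] || continue   # glob did not match: no interface bound
        echo "Found net devices under $pci: ${net##*/} ($(cat "$net/operstate"))"
    done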
00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:18.647 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:35:18.648 Found net devices under 0000:09:00.0: cvl_0_0 00:35:18.648 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:18.648 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:18.648 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:18.648 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:18.648 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:18.648 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:18.648 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:18.648 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:18.648 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:35:18.648 Found net devices under 0000:09:00.1: cvl_0_1 00:35:18.648 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:18.648 09:01:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:18.648 09:01:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:18.648 09:01:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:35:18.648 09:01:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:18.648 09:01:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:35:18.648 09:01:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:35:19.213 09:01:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:35:20.584 09:01:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:35:25.845 09:01:20 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:35:25.845 Found 0000:09:00.0 (0x8086 - 0x159b) 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:35:25.845 Found 0000:09:00.1 (0x8086 - 0x159b) 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:35:25.845 Found net devices under 0000:09:00.0: cvl_0_0 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:35:25.845 Found net devices under 0000:09:00.1: cvl_0_1 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:25.845 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:25.846 09:01:20 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:25.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:25.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:35:25.846 00:35:25.846 --- 10.0.0.2 ping statistics --- 00:35:25.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.846 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:25.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:25.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:35:25.846 00:35:25.846 --- 10.0.0.1 ping statistics --- 00:35:25.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.846 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@721 -- # xtrace_disable 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2356697 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2356697 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@828 -- # '[' -z 2356697 ']' 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:25.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:25.846 [2024-05-15 09:01:20.414981] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
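nvmf_tcp_init, traced just above, builds the back-to-back topology that the two pings verify: the target port cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while the initiator port cvl_0_1 (10.0.0.1) stays in the root namespace, so NVMe/TCP traffic leaves one E810 port and arrives on the other (the two ports are presumably cabled together on this node). Condensed from the commands in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # matches the 0.217 ms reply above

The nvmf_tgt launch visible in the nvmfappstart line below is prefixed with "ip netns exec cvl_0_0_ns_spdk", so the target process only ever sees the namespaced port.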
00:35:25.846 [2024-05-15 09:01:20.415093] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:25.846 EAL: No free 2048 kB hugepages reported on node 1 00:35:25.846 [2024-05-15 09:01:20.493428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:25.846 [2024-05-15 09:01:20.582520] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:25.846 [2024-05-15 09:01:20.582576] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:25.846 [2024-05-15 09:01:20.582590] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:25.846 [2024-05-15 09:01:20.582601] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:25.846 [2024-05-15 09:01:20.582610] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:25.846 [2024-05-15 09:01:20.582699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:25.846 [2024-05-15 09:01:20.582774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:25.846 [2024-05-15 09:01:20.582842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:25.846 [2024-05-15 09:01:20.582844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@861 -- # return 0 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@727 -- # xtrace_disable 00:35:25.846 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:26.105 [2024-05-15 09:01:20.828125] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:26.105 Malloc1 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:26.105 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:26.106 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:26.106 09:01:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:26.106 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:26.106 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:26.106 [2024-05-15 09:01:20.879126] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:35:26.106 [2024-05-15 09:01:20.879466] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:26.106 09:01:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:26.106 09:01:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2356727 00:35:26.106 09:01:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:26.106 09:01:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:35:26.363 EAL: No free 2048 kB hugepages reported on node 1 00:35:28.263 09:01:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:35:28.263 09:01:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:28.263 09:01:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 
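adq_configure_nvmf_target, traced above, is a short chain of RPCs end to end: tune the posix sock implementation, start the framework, create the TCP transport with an 8 KiB io-unit and sock priority 0, back it with a 64 MB / 512-byte-block malloc bdev, and publish that as a namespace of cnode1 on 10.0.0.2:4420. rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, so a stand-alone equivalent would be roughly (rpc.py path assumed relative to an SPDK checkout):

    rpc=scripts/rpc.py
    $rpc sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The perf run launched in the background above (spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0) then connects from cores 4-7 to this listener, and the nvmf_get_stats output below shows how those connections land on the target's poll groups.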
00:35:28.263 09:01:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:28.263 09:01:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:35:28.263 "tick_rate": 2700000000, 00:35:28.263 "poll_groups": [ 00:35:28.263 { 00:35:28.263 "name": "nvmf_tgt_poll_group_000", 00:35:28.263 "admin_qpairs": 1, 00:35:28.263 "io_qpairs": 1, 00:35:28.263 "current_admin_qpairs": 1, 00:35:28.263 "current_io_qpairs": 1, 00:35:28.263 "pending_bdev_io": 0, 00:35:28.263 "completed_nvme_io": 17504, 00:35:28.263 "transports": [ 00:35:28.263 { 00:35:28.263 "trtype": "TCP" 00:35:28.263 } 00:35:28.263 ] 00:35:28.263 }, 00:35:28.263 { 00:35:28.263 "name": "nvmf_tgt_poll_group_001", 00:35:28.263 "admin_qpairs": 0, 00:35:28.263 "io_qpairs": 1, 00:35:28.263 "current_admin_qpairs": 0, 00:35:28.263 "current_io_qpairs": 1, 00:35:28.263 "pending_bdev_io": 0, 00:35:28.263 "completed_nvme_io": 20366, 00:35:28.263 "transports": [ 00:35:28.263 { 00:35:28.263 "trtype": "TCP" 00:35:28.263 } 00:35:28.263 ] 00:35:28.263 }, 00:35:28.263 { 00:35:28.263 "name": "nvmf_tgt_poll_group_002", 00:35:28.263 "admin_qpairs": 0, 00:35:28.263 "io_qpairs": 1, 00:35:28.263 "current_admin_qpairs": 0, 00:35:28.263 "current_io_qpairs": 1, 00:35:28.263 "pending_bdev_io": 0, 00:35:28.263 "completed_nvme_io": 20049, 00:35:28.263 "transports": [ 00:35:28.263 { 00:35:28.263 "trtype": "TCP" 00:35:28.263 } 00:35:28.263 ] 00:35:28.263 }, 00:35:28.263 { 00:35:28.263 "name": "nvmf_tgt_poll_group_003", 00:35:28.263 "admin_qpairs": 0, 00:35:28.263 "io_qpairs": 1, 00:35:28.263 "current_admin_qpairs": 0, 00:35:28.263 "current_io_qpairs": 1, 00:35:28.263 "pending_bdev_io": 0, 00:35:28.263 "completed_nvme_io": 20101, 00:35:28.263 "transports": [ 00:35:28.263 { 00:35:28.263 "trtype": "TCP" 00:35:28.263 } 00:35:28.263 ] 00:35:28.263 } 00:35:28.263 ] 00:35:28.263 }' 00:35:28.263 09:01:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:35:28.264 09:01:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:35:28.264 09:01:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:35:28.264 09:01:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:35:28.264 09:01:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2356727 00:35:36.372 Initializing NVMe Controllers 00:35:36.372 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:36.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:35:36.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:35:36.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:35:36.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:35:36.372 Initialization complete. Launching workers. 
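That nvmf_get_stats dump is the functional assertion of the test: each of the four target poll groups reports current_io_qpairs of 1, meaning the four perf connections spread one per reactor core rather than piling onto a single poll group. The jq/wc pipeline in the trace reduces to the check below (rpc_cmd as in the trace; the trailing "length" merely yields one output line per matching group so wc -l can count them):

    count=$(rpc_cmd nvmf_get_stats |
            jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' |
            wc -l)
    [[ $count -ne 4 ]] && echo 'qpairs not spread across all 4 poll groups' >&2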
00:35:36.372 ======================================================== 00:35:36.372 Latency(us) 00:35:36.372 Device Information : IOPS MiB/s Average min max 00:35:36.372 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10571.44 41.29 6054.27 2381.94 9163.66 00:35:36.372 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10697.34 41.79 5984.89 2584.37 8732.98 00:35:36.372 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10526.25 41.12 6081.58 2731.86 10130.94 00:35:36.372 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9158.97 35.78 6989.26 2799.78 10477.14 00:35:36.372 ======================================================== 00:35:36.372 Total : 40954.00 159.98 6252.27 2381.94 10477.14 00:35:36.372 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:36.372 rmmod nvme_tcp 00:35:36.372 rmmod nvme_fabrics 00:35:36.372 rmmod nvme_keyring 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2356697 ']' 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2356697 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # '[' -z 2356697 ']' 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # kill -0 2356697 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # uname 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2356697 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2356697' 00:35:36.372 killing process with pid 2356697 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # kill 2356697 00:35:36.372 [2024-05-15 09:01:31.099054] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:35:36.372 09:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@971 -- # wait 2356697 00:35:36.629 09:01:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:36.629 09:01:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:36.629 09:01:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:36.629 09:01:31 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:36.629 09:01:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:36.630 09:01:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:36.630 09:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:36.630 09:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:39.174 09:01:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:39.174 09:01:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:35:39.174 09:01:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:35:39.447 09:01:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:35:40.840 09:01:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:35:46.101 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:35:46.102 
09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:35:46.102 Found 0000:09:00.0 (0x8086 - 0x159b) 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:35:46.102 Found 0000:09:00.1 (0x8086 - 0x159b) 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:35:46.102 Found net devices under 0000:09:00.0: cvl_0_0 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:35:46.102 Found net devices under 0000:09:00.1: cvl_0_1 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:46.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:46.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:35:46.102 00:35:46.102 --- 10.0.0.2 ping statistics --- 00:35:46.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:46.102 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:46.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:46.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:35:46.102 00:35:46.102 --- 10.0.0.1 ping statistics --- 00:35:46.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:46.102 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:35:46.102 net.core.busy_poll = 1 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:35:46.102 net.core.busy_read = 1 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@721 -- # xtrace_disable 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2359213 00:35:46.102 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:46.103 09:01:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2359213 00:35:46.103 09:01:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@828 -- # '[' -z 2359213 ']' 00:35:46.103 09:01:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:46.103 09:01:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:46.103 09:01:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:46.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:46.103 09:01:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:46.103 09:01:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:46.103 [2024-05-15 09:01:40.820021] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:35:46.103 [2024-05-15 09:01:40.820099] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:46.103 EAL: No free 2048 kB hugepages reported on node 1 00:35:46.361 [2024-05-15 09:01:40.896923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:46.361 [2024-05-15 09:01:40.982117] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:46.361 [2024-05-15 09:01:40.982180] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:46.361 [2024-05-15 09:01:40.982208] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:46.361 [2024-05-15 09:01:40.982226] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:46.361 [2024-05-15 09:01:40.982237] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
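The block above is the core of the ADQ setup: inside the cvl_0_0_ns_spdk namespace, target/perf_adq.sh enables hardware TC offload on the target port, turns on socket busy polling, and splits the NIC queues into two traffic classes so that NVMe/TCP traffic on port 4420 gets its own hardware queue group. Condensed into a standalone sketch (IFACE and TADDR are stand-ins for cvl_0_0 and 10.0.0.2; in the run above every command is wrapped in ip netns exec cvl_0_0_ns_spdk):

    # ADQ driver setup, condensed from the perf_adq.sh@22-38 trace above
    ethtool --offload "$IFACE" hw-tc-offload on                  # allow tc to program NIC queues
    ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1                               # poll sockets instead of waiting on interrupts
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 = 2 queues at offset 0 (default traffic),
    # TC1 = 2 queues at offset 2 (reserved for NVMe/TCP)
    tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev "$IFACE" ingress
    # steer TCP traffic to port 4420 on the target address into TC1, matched in hardware
    tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
        dst_ip "$TADDR"/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper that follows pins transmit queues to the receive queues of the same channel, and the target is then started with core mask 0xF (four reactors) so the poll-group-to-queue mapping can be checked once traffic is flowing.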
00:35:46.361 [2024-05-15 09:01:40.982324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:46.361 [2024-05-15 09:01:40.982443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:46.361 [2024-05-15 09:01:40.982492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:46.361 [2024-05-15 09:01:40.982495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:46.361 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:46.361 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@861 -- # return 0 00:35:46.361 09:01:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:46.361 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@727 -- # xtrace_disable 00:35:46.361 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:46.361 09:01:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:46.361 09:01:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:35:46.361 09:01:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:35:46.361 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:46.361 09:01:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:35:46.361 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:46.361 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:46.361 09:01:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:35:46.361 09:01:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:35:46.361 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:46.361 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:46.361 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:46.361 09:01:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:35:46.361 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:46.361 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:46.619 [2024-05-15 09:01:41.226042] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:46.619 Malloc1 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:46.619 09:01:41 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:46.619 [2024-05-15 09:01:41.278812] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:35:46.619 [2024-05-15 09:01:41.279121] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2359359 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:35:46.619 09:01:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:46.619 EAL: No free 2048 kB hugepages reported on node 1 00:35:48.518 09:01:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:35:48.518 09:01:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:48.518 09:01:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:48.518 09:01:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:48.518 09:01:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:35:48.518 "tick_rate": 2700000000, 00:35:48.518 "poll_groups": [ 00:35:48.518 { 00:35:48.518 "name": "nvmf_tgt_poll_group_000", 00:35:48.518 "admin_qpairs": 1, 00:35:48.518 "io_qpairs": 2, 00:35:48.518 "current_admin_qpairs": 1, 00:35:48.518 "current_io_qpairs": 2, 00:35:48.518 "pending_bdev_io": 0, 00:35:48.518 "completed_nvme_io": 26709, 00:35:48.518 "transports": [ 00:35:48.518 { 00:35:48.518 "trtype": "TCP" 00:35:48.518 } 00:35:48.518 ] 00:35:48.518 }, 00:35:48.518 { 00:35:48.518 "name": "nvmf_tgt_poll_group_001", 00:35:48.518 "admin_qpairs": 0, 00:35:48.518 "io_qpairs": 2, 00:35:48.518 "current_admin_qpairs": 0, 00:35:48.518 "current_io_qpairs": 2, 00:35:48.518 "pending_bdev_io": 0, 00:35:48.518 "completed_nvme_io": 23453, 00:35:48.518 "transports": [ 00:35:48.518 { 00:35:48.518 "trtype": "TCP" 00:35:48.518 } 00:35:48.518 ] 00:35:48.518 }, 00:35:48.518 { 00:35:48.518 "name": 
"nvmf_tgt_poll_group_002", 00:35:48.518 "admin_qpairs": 0, 00:35:48.518 "io_qpairs": 0, 00:35:48.518 "current_admin_qpairs": 0, 00:35:48.518 "current_io_qpairs": 0, 00:35:48.518 "pending_bdev_io": 0, 00:35:48.518 "completed_nvme_io": 0, 00:35:48.518 "transports": [ 00:35:48.518 { 00:35:48.518 "trtype": "TCP" 00:35:48.518 } 00:35:48.518 ] 00:35:48.518 }, 00:35:48.518 { 00:35:48.518 "name": "nvmf_tgt_poll_group_003", 00:35:48.518 "admin_qpairs": 0, 00:35:48.518 "io_qpairs": 0, 00:35:48.518 "current_admin_qpairs": 0, 00:35:48.518 "current_io_qpairs": 0, 00:35:48.518 "pending_bdev_io": 0, 00:35:48.518 "completed_nvme_io": 0, 00:35:48.518 "transports": [ 00:35:48.518 { 00:35:48.518 "trtype": "TCP" 00:35:48.518 } 00:35:48.518 ] 00:35:48.518 } 00:35:48.518 ] 00:35:48.518 }' 00:35:48.518 09:01:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:35:48.518 09:01:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:35:48.774 09:01:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:35:48.774 09:01:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:35:48.774 09:01:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2359359 00:35:56.879 Initializing NVMe Controllers 00:35:56.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:56.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:35:56.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:35:56.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:35:56.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:35:56.879 Initialization complete. Launching workers. 
00:35:56.879 ========================================================
00:35:56.879 Latency(us)
00:35:56.879 Device Information : IOPS MiB/s Average min max
00:35:56.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6349.20 24.80 10085.27 1906.51 55621.97
00:35:56.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5900.20 23.05 10847.01 2007.20 55449.02
00:35:56.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7385.10 28.85 8667.11 1742.58 53240.16
00:35:56.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6673.50 26.07 9591.19 2071.56 54897.91
00:35:56.879 ========================================================
00:35:56.879 Total : 26307.99 102.77 9732.67 1742.58 55621.97
00:35:56.879
00:35:56.879 09:01:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini
00:35:56.879 09:01:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:35:56.879 09:01:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:35:56.879 09:01:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:35:56.879 09:01:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:35:56.879 09:01:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:35:56.879 09:01:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:35:56.879 rmmod nvme_tcp
00:35:56.879 rmmod nvme_fabrics
00:35:56.879 rmmod nvme_keyring
00:35:56.879 09:01:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:35:56.880 09:01:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:35:56.880 09:01:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:35:56.880 09:01:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2359213 ']'
00:35:56.880 09:01:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2359213
00:35:56.880 09:01:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # '[' -z 2359213 ']'
00:35:56.880 09:01:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # kill -0 2359213
00:35:56.880 09:01:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # uname
00:35:56.880 09:01:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:35:56.880 09:01:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2359213
00:35:56.880 09:01:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # process_name=reactor_0
00:35:56.880 09:01:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']'
00:35:56.880 09:01:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2359213'
00:35:56.880 killing process with pid 2359213
00:35:56.880 09:01:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # kill 2359213
00:35:56.880 [2024-05-15 09:01:51.497829] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:35:56.880 09:01:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@971 -- # wait 2359213
00:35:57.138 09:01:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:35:57.138 09:01:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:35:57.138 09:01:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:35:57.138 09:01:51
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:35:57.138 09:01:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns
00:35:57.138 09:01:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:57.138 09:01:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:35:57.138 09:01:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:00.422 09:01:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:36:00.422 09:01:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:36:00.422
00:36:00.422 real 0m44.338s
00:36:00.422 user 2m35.256s
00:36:00.422 sys 0m11.571s
00:36:00.422 09:01:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # xtrace_disable
00:36:00.422 09:01:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:36:00.422 ************************************
00:36:00.422 END TEST nvmf_perf_adq
00:36:00.422 ************************************
00:36:00.422 09:01:54 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:36:00.422 09:01:54 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']'
00:36:00.422 09:01:54 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable
00:36:00.422 09:01:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:00.422 ************************************
00:36:00.422 START TEST nvmf_shutdown
00:36:00.422 ************************************
00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:36:00.422 * Looking for test storage...
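Reconstructed from the @488-@496 and @274-@279 lines above, the nvmftestfini teardown that closes out the perf_adq test is roughly the setup in reverse (the body of _remove_spdk_ns is not traced here; deleting the spdk-created namespace is its assumed effect):

    # teardown sequence, simplified from the trace above
    sync
    modprobe -v -r nvme-tcp                 # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"      # killprocess: stop the nvmf_tgt reactors
    _remove_spdk_ns                         # assumed: ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1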
00:36:00.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1104 -- # xtrace_disable 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:00.422 ************************************ 00:36:00.422 START TEST nvmf_shutdown_tc1 00:36:00.422 ************************************ 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # nvmf_shutdown_tc1 00:36:00.422 09:01:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:36:00.422 09:01:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:36:02.950 Found 0000:09:00.0 (0x8086 - 0x159b) 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:02.950 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:36:02.951 Found 0000:09:00.1 (0x8086 - 0x159b) 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:02.951 09:01:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:36:02.951 Found net devices under 0000:09:00.0: cvl_0_0 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:36:02.951 Found net devices under 0000:09:00.1: cvl_0_1 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:02.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:02.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:36:02.951 00:36:02.951 --- 10.0.0.2 ping statistics --- 00:36:02.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:02.951 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:02.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:02.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:36:02.951 00:36:02.951 --- 10.0.0.1 ping statistics --- 00:36:02.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:02.951 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@721 -- # xtrace_disable 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2362949 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2362949 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # '[' -z 2362949 ']' 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:02.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:02.951 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:02.951 [2024-05-15 09:01:57.599252] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
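For the shutdown suite the target is brought up the same way as before but with core mask 0x1E, i.e. reactors on cores 1 through 4, leaving core 0 free for the initiator-side tools started later. The nvmfappstart helper traced at nvmf/common.sh@479-482 reduces to roughly (paths abbreviated; waitforlisten is the harness helper that polls the RPC socket):

    # shape of nvmfappstart as traced above, simplified
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # block until /var/tmp/spdk.sock answers RPCs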
00:36:02.951 [2024-05-15 09:01:57.599340] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:02.951 EAL: No free 2048 kB hugepages reported on node 1 00:36:02.951 [2024-05-15 09:01:57.672387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:03.209 [2024-05-15 09:01:57.755628] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:03.209 [2024-05-15 09:01:57.755676] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:03.209 [2024-05-15 09:01:57.755699] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:03.209 [2024-05-15 09:01:57.755710] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:03.209 [2024-05-15 09:01:57.755720] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:03.209 [2024-05-15 09:01:57.755834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:03.209 [2024-05-15 09:01:57.755902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:03.209 [2024-05-15 09:01:57.755968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:36:03.209 [2024-05-15 09:01:57.755970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:03.209 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:03.209 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@861 -- # return 0 00:36:03.209 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:03.209 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:03.210 [2024-05-15 09:01:57.912993] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@721 -- # xtrace_disable 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:03.210 09:01:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:03.210 Malloc1 00:36:03.210 [2024-05-15 09:01:58.000993] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:03.210 [2024-05-15 09:01:58.001326] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:03.468 Malloc2 00:36:03.468 Malloc3 00:36:03.468 Malloc4 00:36:03.468 Malloc5 00:36:03.468 Malloc6 00:36:03.755 Malloc7 00:36:03.755 Malloc8 00:36:03.755 Malloc9 00:36:03.755 Malloc10 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:03.755 09:01:58 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2363120 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2363120 /var/tmp/bdevperf.sock 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # '[' -z 2363120 ']' 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:03.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:03.755 { 00:36:03.755 "params": { 00:36:03.755 "name": "Nvme$subsystem", 00:36:03.755 "trtype": "$TEST_TRANSPORT", 00:36:03.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.755 "adrfam": "ipv4", 00:36:03.755 "trsvcid": "$NVMF_PORT", 00:36:03.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.755 "hdgst": ${hdgst:-false}, 00:36:03.755 "ddgst": ${ddgst:-false} 00:36:03.755 }, 00:36:03.755 "method": "bdev_nvme_attach_controller" 00:36:03.755 } 00:36:03.755 EOF 00:36:03.755 )") 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:03.755 { 00:36:03.755 "params": { 00:36:03.755 "name": "Nvme$subsystem", 00:36:03.755 "trtype": "$TEST_TRANSPORT", 00:36:03.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.755 "adrfam": "ipv4", 00:36:03.755 "trsvcid": "$NVMF_PORT", 00:36:03.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.755 "hdgst": ${hdgst:-false}, 00:36:03.755 "ddgst": ${ddgst:-false} 00:36:03.755 }, 00:36:03.755 "method": "bdev_nvme_attach_controller" 00:36:03.755 } 00:36:03.755 EOF 00:36:03.755 )") 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:03.755 { 00:36:03.755 "params": { 00:36:03.755 "name": "Nvme$subsystem", 00:36:03.755 "trtype": "$TEST_TRANSPORT", 00:36:03.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.755 "adrfam": "ipv4", 00:36:03.755 "trsvcid": "$NVMF_PORT", 00:36:03.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.755 "hdgst": ${hdgst:-false}, 00:36:03.755 "ddgst": ${ddgst:-false} 00:36:03.755 }, 00:36:03.755 "method": "bdev_nvme_attach_controller" 00:36:03.755 } 00:36:03.755 EOF 00:36:03.755 )") 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:03.755 { 00:36:03.755 "params": { 00:36:03.755 "name": "Nvme$subsystem", 00:36:03.755 "trtype": "$TEST_TRANSPORT", 00:36:03.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.755 "adrfam": "ipv4", 00:36:03.755 "trsvcid": "$NVMF_PORT", 00:36:03.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.755 "hdgst": ${hdgst:-false}, 00:36:03.755 "ddgst": ${ddgst:-false} 00:36:03.755 }, 00:36:03.755 "method": "bdev_nvme_attach_controller" 00:36:03.755 } 00:36:03.755 EOF 00:36:03.755 )") 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:03.755 { 00:36:03.755 "params": { 00:36:03.755 "name": "Nvme$subsystem", 00:36:03.755 "trtype": "$TEST_TRANSPORT", 00:36:03.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.755 "adrfam": "ipv4", 00:36:03.755 "trsvcid": "$NVMF_PORT", 00:36:03.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.755 "hdgst": ${hdgst:-false}, 00:36:03.755 "ddgst": ${ddgst:-false} 00:36:03.755 }, 00:36:03.755 "method": "bdev_nvme_attach_controller" 00:36:03.755 } 00:36:03.755 EOF 00:36:03.755 )") 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:03.755 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:03.755 { 00:36:03.755 "params": { 00:36:03.755 "name": "Nvme$subsystem", 00:36:03.755 "trtype": "$TEST_TRANSPORT", 00:36:03.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.755 "adrfam": "ipv4", 00:36:03.755 "trsvcid": "$NVMF_PORT", 00:36:03.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.755 "hdgst": ${hdgst:-false}, 00:36:03.755 "ddgst": ${ddgst:-false} 00:36:03.755 }, 00:36:03.755 "method": "bdev_nvme_attach_controller" 00:36:03.755 } 00:36:03.755 EOF 00:36:03.755 )") 00:36:03.756 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:36:03.756 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:36:03.756 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:03.756 { 00:36:03.756 "params": { 00:36:03.756 "name": "Nvme$subsystem", 00:36:03.756 "trtype": "$TEST_TRANSPORT", 00:36:03.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.756 "adrfam": "ipv4", 00:36:03.756 "trsvcid": "$NVMF_PORT", 00:36:03.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.756 "hdgst": ${hdgst:-false}, 00:36:03.756 "ddgst": ${ddgst:-false} 00:36:03.756 }, 00:36:03.756 "method": "bdev_nvme_attach_controller" 00:36:03.756 } 00:36:03.756 EOF 00:36:03.756 )") 00:36:03.756 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:36:03.756 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:03.756 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:03.756 { 00:36:03.756 "params": { 00:36:03.756 "name": "Nvme$subsystem", 00:36:03.756 "trtype": "$TEST_TRANSPORT", 00:36:03.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.756 "adrfam": "ipv4", 00:36:03.756 "trsvcid": "$NVMF_PORT", 00:36:03.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.756 "hdgst": ${hdgst:-false}, 00:36:03.756 "ddgst": ${ddgst:-false} 00:36:03.756 }, 00:36:03.756 "method": "bdev_nvme_attach_controller" 00:36:03.756 } 00:36:03.756 EOF 00:36:03.756 )") 00:36:03.756 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:36:03.756 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:03.756 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:03.756 { 00:36:03.756 "params": { 00:36:03.756 "name": "Nvme$subsystem", 00:36:03.756 "trtype": "$TEST_TRANSPORT", 00:36:03.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.756 "adrfam": "ipv4", 00:36:03.756 "trsvcid": "$NVMF_PORT", 00:36:03.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.756 "hdgst": ${hdgst:-false}, 00:36:03.756 "ddgst": ${ddgst:-false} 00:36:03.756 }, 00:36:03.756 "method": "bdev_nvme_attach_controller" 00:36:03.756 } 00:36:03.756 EOF 00:36:03.756 )") 00:36:03.756 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:36:03.756 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:03.756 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:03.756 { 00:36:03.756 "params": { 00:36:03.756 "name": "Nvme$subsystem", 00:36:03.756 "trtype": "$TEST_TRANSPORT", 00:36:03.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.756 "adrfam": "ipv4", 00:36:03.756 "trsvcid": "$NVMF_PORT", 00:36:03.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.756 "hdgst": ${hdgst:-false}, 00:36:03.756 "ddgst": ${ddgst:-false} 00:36:03.756 }, 00:36:03.756 "method": "bdev_nvme_attach_controller" 00:36:03.756 } 00:36:03.756 EOF 00:36:03.756 )") 00:36:03.756 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:36:03.756 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:36:03.756 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=,
00:36:03.756 09:01:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:36:03.756   "params": {
00:36:03.756     "name": "Nvme1",
00:36:03.756     "trtype": "tcp",
00:36:03.756     "traddr": "10.0.0.2",
00:36:03.756     "adrfam": "ipv4",
00:36:03.756     "trsvcid": "4420",
00:36:03.756     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:36:03.756     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:36:03.756     "hdgst": false,
00:36:03.756     "ddgst": false
00:36:03.756   },
00:36:03.756   "method": "bdev_nvme_attach_controller"
00:36:03.756 },{
[... identical resolved stanzas for Nvme2 through Nvme10 follow (cnode2-cnode10, host2-host10, same tcp/10.0.0.2/4420 parameters) ...]
00:36:03.756 }'
00:36:03.756 [2024-05-15 09:01:58.511890] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:36:03.756 [2024-05-15 09:01:58.511963] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:36:04.015 EAL: No free 2048 kB hugepages reported on node 1
00:36:04.015 [2024-05-15 09:01:58.584865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:04.015 [2024-05-15 09:01:58.668651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:36:05.913 09:02:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:36:05.913 09:02:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@861 -- # return 0
00:36:05.914 09:02:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:36:05.914 09:02:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable
00:36:05.914 09:02:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:36:05.914 09:02:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:36:05.914 09:02:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2363120
00:36:05.914 09:02:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1
00:36:05.914 09:02:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1
00:36:06.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2363120 Killed  $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}")
00:36:06.848 09:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2362949
00:36:06.848 09:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:36:06.848 09:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:36:06.848 09:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=()
00:36:06.848 09:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config
00:36:06.848 09:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:36:06.848 09:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:36:06.848 {
00:36:06.848   "params": {
00:36:06.848     "name": "Nvme$subsystem",
00:36:06.848     "trtype": "$TEST_TRANSPORT",
00:36:06.848     "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:06.848     "adrfam": "ipv4",
00:36:06.848     "trsvcid": "$NVMF_PORT",
00:36:06.848     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:06.848     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:06.848     "hdgst": ${hdgst:-false},
00:36:06.848     "ddgst": ${ddgst:-false}
00:36:06.848   },
00:36:06.848   "method": "bdev_nvme_attach_controller"
00:36:06.848 }
00:36:06.848 EOF
00:36:06.848 )")
00:36:06.848 09:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat
[... the for/config+=/cat block repeats identically for each of the 10 subsystems ...]
00:36:06.848 09:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq .
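Worth pausing on the plumbing before the resolved JSON is printed below: the generated config never touches disk. shutdown.sh hands the generator's output to bdevperf through process substitution, which is why bdevperf reported its config file as /dev/fd/62 above (the descriptor number varies per invocation). Reassembled from the traced commands, with $rootdir as in the "Killed" line:

# How shutdown.sh wires the generated JSON into bdevperf without a temp file:
# <( ... ) exposes gen_nvmf_target_json's stdout as /dev/fd/N.
"$rootdir/build/examples/bdevperf" \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1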
00:36:06.848 09:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=,
00:36:06.849 09:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:36:06.849   "params": {
00:36:06.849     "name": "Nvme1",
00:36:06.849     "trtype": "tcp",
00:36:06.849     "traddr": "10.0.0.2",
00:36:06.849     "adrfam": "ipv4",
00:36:06.849     "trsvcid": "4420",
00:36:06.849     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:36:06.849     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:36:06.849     "hdgst": false,
00:36:06.849     "ddgst": false
00:36:06.849   },
00:36:06.849   "method": "bdev_nvme_attach_controller"
00:36:06.849 },{
[... identical resolved stanzas for Nvme2 through Nvme10 follow ...]
00:36:06.849 }'
00:36:06.849 [2024-05-15 09:02:01.566638] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:36:06.849 [2024-05-15 09:02:01.566726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2363544 ]
00:36:07.107 EAL: No free 2048 kB hugepages reported on node 1
00:36:07.107 [2024-05-15 09:02:01.639786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:08.479 [2024-05-15 09:02:01.723843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:36:08.479 Running I/O for 1 seconds...
00:36:09.851
00:36:09.851 All jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400
00:36:09.851                                                          Latency(us)
00:36:09.851 Device    : runtime(s)     IOPS    MiB/s   Fail/s   TO/s     Average         min         max
00:36:09.851 Nvme1n1   :       1.15   223.21    13.95     0.00   0.00   283967.72    22330.79   253211.69
00:36:09.851 Nvme2n1   :       1.14   225.52    14.09     0.00   0.00   276423.87    19709.35   254765.13
00:36:09.851 Nvme3n1   :       1.12   228.90    14.31     0.00   0.00   267522.47    19320.98   256318.58
00:36:09.851 Nvme4n1   :       1.15   278.79    17.42     0.00   0.00   215809.14    16408.27   246997.90
00:36:09.851 Nvme5n1   :       1.16   221.19    13.82     0.00   0.00   268250.83    20971.52   267192.70
00:36:09.851 Nvme6n1   :       1.15   222.57    13.91     0.00   0.00   262006.71    22816.24   253211.69
00:36:09.851 Nvme7n1   :       1.13   226.48    14.15     0.00   0.00   252393.81    19612.25   254765.13
00:36:09.851 Nvme8n1   :       1.18   271.13    16.95     0.00   0.00   207495.62    16796.63   257872.02
00:36:09.851 Nvme9n1   :       1.19   215.73    13.48     0.00   0.00   257703.25    22427.88   265639.25
00:36:09.851 Nvme10n1  :       1.20   266.12    16.63     0.00   0.00   205537.13     6893.42   279620.27
00:36:09.851 ====================================================================================================
00:36:09.851 Total     :              2379.63   148.73     0.00   0.00   246913.58     6893.42   279620.27
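A quick consistency check on that table: summing the per-device rows should reproduce bdevperf's Total row. A hedged awk one-liner for doing so against bdevperf's raw output (i.e. without the harness's elapsed-time prefix column; bdevperf.out is a placeholder filename):

# Cross-check the Total row from the per-device rows (columns as printed
# above: $1 device, $3 runtime, $4 IOPS, $5 MiB/s in the raw output).
awk '$1 ~ /^Nvme[0-9]+n1$/ && $2 == ":" { iops += $4; mib += $5 }
     END { printf "sum: %.2f IOPS, %.2f MiB/s\n", iops, mib }' bdevperf.out

For the run above this sums to about 2379.64 IOPS and 148.71 MiB/s, matching the reported Total (2379.63 / 148.73) to within the rounding of the per-device values.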
00:36:09.851 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20}
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:36:09.852 rmmod nvme_tcp
00:36:09.852 rmmod nvme_fabrics
00:36:09.852 rmmod nvme_keyring
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2362949 ']'
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2362949
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@947 -- # '[' -z 2362949 ']'
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # kill -0 2362949
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # uname
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2362949
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2362949'
00:36:09.852 killing process with pid 2362949
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # kill 2362949
00:36:09.852 [2024-05-15 09:02:04.581357] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:36:09.852 09:02:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # wait 2362949
00:36:10.417 09:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:36:10.417 09:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:36:10.417 09:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:36:10.417 09:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:36:10.417 09:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:36:10.417 09:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:10.417 09:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:36:10.417 09:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:36:12.948
00:36:12.948 real    0m12.185s
00:36:12.948 user    0m33.658s
00:36:12.948 sys     0m3.615s
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # xtrace_disable
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:36:12.948 ************************************
00:36:12.948 END TEST nvmf_shutdown_tc1
00:36:12.948 ************************************
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1104 -- # xtrace_disable
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:36:12.948 ************************************
00:36:12.948 START TEST nvmf_shutdown_tc2
00:36:12.948 ************************************
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # nvmf_shutdown_tc2
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:12.948 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
[... nvmf/common.sh@291-298: declaration of the empty pci_devs/pci_net_devs/pci_drivers/net_devs/e810/x722/mlx arrays ...]
[... nvmf/common.sh@301-318: the supported device-ID tables are filled in (e810: 0x1592, 0x159b; x722: 0x37d2; mlx: 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x1017, 0x1019, 0x1015, 0x1013) ...]
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)'
00:36:12.949 Found 0000:09:00.0 (0x8086 - 0x159b)
[... nvmf/common.sh@342-352: driver checks for 0000:09:00.0 (ice driver, not unknown/unbound, device ID not 0x1017/0x1019, transport not rdma) ...]
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)'
00:36:12.949 Found 0000:09:00.1 (0x8086 - 0x159b)
[... the same @342-352 checks repeat for 0000:09:00.1 ...]
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]]
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:36:12.949 Found net devices under 0000:09:00.0: cvl_0_0
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
[... the @382-@401 walk repeats for 0000:09:00.1 ...]
00:36:12.949 Found net devices under 0000:09:00.1: cvl_0_1
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
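The PCI walk above can be reproduced by hand with pciutils and sysfs; a hedged one-liner for the device ID this host matched (0x159b; 0x1592 is the other E810 ID in the table, and the sysfs layout is the same one nvmf/common.sh globs):

# List Intel (0x8086) E810 NICs and the netdev names behind them.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    echo "$pci -> $(ls /sys/bus/pci/devices/"$pci"/net/)"
done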
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:36:12.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:12.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms
00:36:12.949
00:36:12.949 --- 10.0.0.2 ping statistics ---
00:36:12.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:12.949 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:12.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:12.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms
00:36:12.949
00:36:12.949 --- 10.0.0.1 ping statistics ---
00:36:12.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:12.949 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:36:12.949 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2364303
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2364303
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # '[' -z 2364303 ']'
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:12.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
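waitforlisten, traced above with rpc_addr=/var/tmp/spdk.sock and max_retries=100, boils down to a poll loop: the app must stay alive and its RPC socket must start answering. A sketch of the idea (the real helper in autotest_common.sh is more elaborate; the rpc.py path and rpc_get_methods probe are reasonable but assumed details):

# waitforlisten in miniature: poll the app's RPC socket until it answers
# or the retry budget runs out.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1    # app died while we waited
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                              # socket is up and answering
        fi
        sleep 0.1
    done
    return 1
}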
00:36:12.950 [2024-05-15 09:02:07.375946] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:36:12.950 [2024-05-15 09:02:07.376030] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:12.950 EAL: No free 2048 kB hugepages reported on node 1
00:36:12.950 [2024-05-15 09:02:07.455143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:12.950 [2024-05-15 09:02:07.545038] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:12.950 [2024-05-15 09:02:07.545100] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:12.950 [2024-05-15 09:02:07.545125] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:12.950 [2024-05-15 09:02:07.545139] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:12.950 [2024-05-15 09:02:07.545150] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:12.950 [2024-05-15 09:02:07.545250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:36:12.950 [2024-05-15 09:02:07.545328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:36:12.950 [2024-05-15 09:02:07.545394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:36:12.950 [2024-05-15 09:02:07.545392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@861 -- # return 0
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:12.950 [2024-05-15 09:02:07.680759] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10})
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
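rpc_cmd here is autotest_common.sh's wrapper around scripts/rpc.py, so the transport setup above is equivalent to issuing the RPC directly against the target's default socket. The flags are passed through verbatim from NVMF_TRANSPORT_OPTS='-t tcp -o' plus the script's -u 8192; the target runs inside the cvl_0_0_ns_spdk network namespace, but its UNIX-domain socket is still reachable from the host:

# Same transport-creation call, made by hand from an SPDK checkout:
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192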
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat
[... the @27/@28 for/cat pair repeats for each of the 10 subsystems, appending that subsystem's RPCs to rpcs.txt; see the reconstructed stanza after this block ...]
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:36:12.950 09:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:12.950 Malloc1
00:36:13.208 [2024-05-15 09:02:07.755318] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:36:13.208 [2024-05-15 09:02:07.755654] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:13.208 Malloc2
00:36:13.208 Malloc3
00:36:13.208 Malloc4
00:36:13.208 Malloc5
00:36:13.208 Malloc6
00:36:13.467 Malloc7
00:36:13.467 Malloc8
00:36:13.467 Malloc9
00:36:13.467 Malloc10
00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems
00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable
00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
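The log never echoes rpcs.txt itself, only the Malloc1-Malloc10 results and the listen notice, so the following is a hypothetical reconstruction of one per-subsystem stanza based on SPDK's shutdown.sh conventions; the RPC names are real, but $MALLOC_SIZE, $MALLOC_BLOCK_SIZE, the SPDK$i serials, and $testdir are assumptions. The deprecation warning above suggests the listener was still added with the older [listen_]address.transport spelling rather than trtype.

# Plausible content appended per subsystem $i:
cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create -b Malloc$i $MALLOC_SIZE $MALLOC_BLOCK_SIZE
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t $TEST_TRANSPORT -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT
EOF
# shutdown.sh@35 then plays the whole batch through a single rpc_cmd, which
# is what produces the Malloc1..Malloc10 lines and the listen notice above.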
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2364366 00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2364366 /var/tmp/bdevperf.sock 00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # '[' -z 2364366 ']' 00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:13.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:13.467 { 00:36:13.467 "params": { 00:36:13.467 "name": "Nvme$subsystem", 00:36:13.467 "trtype": "$TEST_TRANSPORT", 00:36:13.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:13.467 "adrfam": "ipv4", 00:36:13.467 "trsvcid": "$NVMF_PORT", 00:36:13.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:13.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:13.467 "hdgst": ${hdgst:-false}, 00:36:13.467 "ddgst": ${ddgst:-false} 00:36:13.467 }, 00:36:13.467 "method": "bdev_nvme_attach_controller" 00:36:13.467 } 00:36:13.467 EOF 00:36:13.467 )") 00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:13.467 { 00:36:13.467 "params": { 00:36:13.467 "name": "Nvme$subsystem", 00:36:13.467 "trtype": "$TEST_TRANSPORT", 00:36:13.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:13.467 "adrfam": "ipv4", 00:36:13.467 "trsvcid": "$NVMF_PORT", 00:36:13.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:13.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:13.467 "hdgst": ${hdgst:-false}, 00:36:13.467 "ddgst": ${ddgst:-false} 00:36:13.467 }, 00:36:13.467 "method": "bdev_nvme_attach_controller" 00:36:13.467 } 00:36:13.467 EOF 00:36:13.467 )") 00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:13.467 { 00:36:13.467 "params": { 00:36:13.467 "name": "Nvme$subsystem", 00:36:13.467 "trtype": "$TEST_TRANSPORT", 00:36:13.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:13.467 "adrfam": "ipv4", 00:36:13.467 "trsvcid": "$NVMF_PORT", 00:36:13.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:13.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:13.467 "hdgst": ${hdgst:-false}, 00:36:13.467 "ddgst": ${ddgst:-false} 00:36:13.467 }, 00:36:13.467 "method": "bdev_nvme_attach_controller" 00:36:13.467 } 00:36:13.467 EOF 00:36:13.467 )") 00:36:13.467 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:13.468 { 00:36:13.468 "params": { 00:36:13.468 "name": "Nvme$subsystem", 00:36:13.468 "trtype": "$TEST_TRANSPORT", 00:36:13.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:13.468 "adrfam": "ipv4", 00:36:13.468 "trsvcid": "$NVMF_PORT", 00:36:13.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:13.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:13.468 "hdgst": ${hdgst:-false}, 00:36:13.468 "ddgst": ${ddgst:-false} 00:36:13.468 }, 00:36:13.468 "method": "bdev_nvme_attach_controller" 00:36:13.468 } 00:36:13.468 EOF 00:36:13.468 )") 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:13.468 { 00:36:13.468 "params": { 00:36:13.468 "name": "Nvme$subsystem", 00:36:13.468 "trtype": "$TEST_TRANSPORT", 00:36:13.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:13.468 "adrfam": "ipv4", 00:36:13.468 "trsvcid": "$NVMF_PORT", 00:36:13.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:13.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:13.468 "hdgst": ${hdgst:-false}, 00:36:13.468 "ddgst": ${ddgst:-false} 00:36:13.468 }, 00:36:13.468 "method": "bdev_nvme_attach_controller" 00:36:13.468 } 00:36:13.468 EOF 00:36:13.468 )") 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:13.468 { 00:36:13.468 "params": { 00:36:13.468 "name": "Nvme$subsystem", 00:36:13.468 "trtype": "$TEST_TRANSPORT", 00:36:13.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:13.468 "adrfam": "ipv4", 00:36:13.468 "trsvcid": "$NVMF_PORT", 00:36:13.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:13.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:13.468 "hdgst": ${hdgst:-false}, 00:36:13.468 "ddgst": ${ddgst:-false} 00:36:13.468 }, 00:36:13.468 "method": "bdev_nvme_attach_controller" 00:36:13.468 } 00:36:13.468 EOF 00:36:13.468 )") 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:13.468 { 00:36:13.468 "params": { 00:36:13.468 "name": "Nvme$subsystem", 00:36:13.468 "trtype": "$TEST_TRANSPORT", 00:36:13.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:13.468 "adrfam": "ipv4", 00:36:13.468 "trsvcid": "$NVMF_PORT", 00:36:13.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:13.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:13.468 "hdgst": ${hdgst:-false}, 00:36:13.468 "ddgst": ${ddgst:-false} 00:36:13.468 }, 00:36:13.468 "method": "bdev_nvme_attach_controller" 00:36:13.468 } 00:36:13.468 EOF 00:36:13.468 )") 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:13.468 { 00:36:13.468 "params": { 00:36:13.468 "name": "Nvme$subsystem", 00:36:13.468 "trtype": "$TEST_TRANSPORT", 00:36:13.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:13.468 "adrfam": "ipv4", 00:36:13.468 "trsvcid": "$NVMF_PORT", 00:36:13.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:13.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:13.468 "hdgst": ${hdgst:-false}, 00:36:13.468 "ddgst": ${ddgst:-false} 00:36:13.468 }, 00:36:13.468 "method": "bdev_nvme_attach_controller" 00:36:13.468 } 00:36:13.468 EOF 00:36:13.468 )") 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:13.468 { 00:36:13.468 "params": { 00:36:13.468 "name": "Nvme$subsystem", 00:36:13.468 "trtype": "$TEST_TRANSPORT", 00:36:13.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:13.468 "adrfam": "ipv4", 00:36:13.468 "trsvcid": "$NVMF_PORT", 00:36:13.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:13.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:13.468 "hdgst": ${hdgst:-false}, 00:36:13.468 "ddgst": ${ddgst:-false} 00:36:13.468 }, 00:36:13.468 "method": "bdev_nvme_attach_controller" 00:36:13.468 } 00:36:13.468 EOF 00:36:13.468 )") 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:13.468 { 00:36:13.468 "params": { 00:36:13.468 "name": "Nvme$subsystem", 00:36:13.468 "trtype": "$TEST_TRANSPORT", 00:36:13.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:13.468 "adrfam": "ipv4", 00:36:13.468 "trsvcid": "$NVMF_PORT", 00:36:13.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:13.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:13.468 "hdgst": ${hdgst:-false}, 00:36:13.468 "ddgst": ${ddgst:-false} 00:36:13.468 }, 00:36:13.468 "method": "bdev_nvme_attach_controller" 00:36:13.468 } 00:36:13.468 EOF 00:36:13.468 )") 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:36:13.468 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:36:13.726 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:36:13.726 09:02:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:13.726 "params": { 00:36:13.726 "name": "Nvme1", 00:36:13.726 "trtype": "tcp", 00:36:13.726 "traddr": "10.0.0.2", 00:36:13.726 "adrfam": "ipv4", 00:36:13.726 "trsvcid": "4420", 00:36:13.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:13.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:13.726 "hdgst": false, 00:36:13.726 "ddgst": false 00:36:13.726 }, 00:36:13.726 "method": "bdev_nvme_attach_controller" 00:36:13.726 },{ 00:36:13.726 "params": { 00:36:13.726 "name": "Nvme2", 00:36:13.727 "trtype": "tcp", 00:36:13.727 "traddr": "10.0.0.2", 00:36:13.727 "adrfam": "ipv4", 00:36:13.727 "trsvcid": "4420", 00:36:13.727 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:13.727 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:13.727 "hdgst": false, 00:36:13.727 "ddgst": false 00:36:13.727 }, 00:36:13.727 "method": "bdev_nvme_attach_controller" 00:36:13.727 },{ 00:36:13.727 "params": { 00:36:13.727 "name": "Nvme3", 00:36:13.727 "trtype": "tcp", 00:36:13.727 "traddr": "10.0.0.2", 00:36:13.727 "adrfam": "ipv4", 00:36:13.727 "trsvcid": "4420", 00:36:13.727 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:36:13.727 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:36:13.727 "hdgst": false, 00:36:13.727 "ddgst": false 00:36:13.727 }, 00:36:13.727 "method": "bdev_nvme_attach_controller" 00:36:13.727 },{ 00:36:13.727 "params": { 00:36:13.727 "name": "Nvme4", 00:36:13.727 "trtype": "tcp", 00:36:13.727 "traddr": "10.0.0.2", 00:36:13.727 "adrfam": "ipv4", 00:36:13.727 "trsvcid": "4420", 00:36:13.727 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:36:13.727 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:36:13.727 "hdgst": false, 00:36:13.727 "ddgst": false 00:36:13.727 }, 00:36:13.727 "method": "bdev_nvme_attach_controller" 00:36:13.727 },{ 00:36:13.727 "params": { 00:36:13.727 "name": "Nvme5", 00:36:13.727 "trtype": "tcp", 00:36:13.727 "traddr": "10.0.0.2", 00:36:13.727 "adrfam": "ipv4", 00:36:13.727 "trsvcid": "4420", 00:36:13.727 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:36:13.727 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:36:13.727 "hdgst": false, 00:36:13.727 "ddgst": false 00:36:13.727 }, 00:36:13.727 "method": "bdev_nvme_attach_controller" 00:36:13.727 },{ 00:36:13.727 "params": { 00:36:13.727 "name": "Nvme6", 00:36:13.727 "trtype": "tcp", 00:36:13.727 "traddr": "10.0.0.2", 00:36:13.727 "adrfam": "ipv4", 00:36:13.727 "trsvcid": "4420", 00:36:13.727 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:36:13.727 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:36:13.727 "hdgst": false, 00:36:13.727 "ddgst": false 00:36:13.727 }, 00:36:13.727 "method": "bdev_nvme_attach_controller" 00:36:13.727 },{ 00:36:13.727 "params": { 00:36:13.727 "name": "Nvme7", 00:36:13.727 "trtype": "tcp", 00:36:13.727 "traddr": "10.0.0.2", 00:36:13.727 "adrfam": "ipv4", 00:36:13.727 "trsvcid": "4420", 00:36:13.727 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:36:13.727 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:36:13.727 "hdgst": false, 00:36:13.727 "ddgst": false 00:36:13.727 }, 00:36:13.727 "method": "bdev_nvme_attach_controller" 00:36:13.727 },{ 00:36:13.727 "params": { 00:36:13.727 "name": "Nvme8", 00:36:13.727 "trtype": "tcp", 00:36:13.727 "traddr": "10.0.0.2", 00:36:13.727 "adrfam": "ipv4", 00:36:13.727 "trsvcid": "4420", 00:36:13.727 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:36:13.727 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:36:13.727 "hdgst": false, 
00:36:13.727 "ddgst": false 00:36:13.727 }, 00:36:13.727 "method": "bdev_nvme_attach_controller" 00:36:13.727 },{ 00:36:13.727 "params": { 00:36:13.727 "name": "Nvme9", 00:36:13.727 "trtype": "tcp", 00:36:13.727 "traddr": "10.0.0.2", 00:36:13.727 "adrfam": "ipv4", 00:36:13.727 "trsvcid": "4420", 00:36:13.727 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:36:13.727 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:36:13.727 "hdgst": false, 00:36:13.727 "ddgst": false 00:36:13.727 }, 00:36:13.727 "method": "bdev_nvme_attach_controller" 00:36:13.727 },{ 00:36:13.727 "params": { 00:36:13.727 "name": "Nvme10", 00:36:13.727 "trtype": "tcp", 00:36:13.727 "traddr": "10.0.0.2", 00:36:13.727 "adrfam": "ipv4", 00:36:13.727 "trsvcid": "4420", 00:36:13.727 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:36:13.727 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:36:13.727 "hdgst": false, 00:36:13.727 "ddgst": false 00:36:13.727 }, 00:36:13.727 "method": "bdev_nvme_attach_controller" 00:36:13.727 }' 00:36:13.727 [2024-05-15 09:02:08.266876] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:36:13.727 [2024-05-15 09:02:08.266981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364366 ] 00:36:13.727 EAL: No free 2048 kB hugepages reported on node 1 00:36:13.727 [2024-05-15 09:02:08.343363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:13.727 [2024-05-15 09:02:08.427343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:15.101 Running I/O for 10 seconds... 00:36:15.668 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:15.668 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@861 -- # return 0 00:36:15.668 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:36:15.668 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:15.668 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:15.668 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:15.668 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:36:15.668 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:36:15.668 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:36:15.668 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:36:15.668 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:36:15.668 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:36:15.668 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:36:15.668 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:36:15.668 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:15.668 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # 
jq -r '.bdevs[0].num_read_ops' 00:36:15.668 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:15.668 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:15.668 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:36:15.668 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:36:15.668 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:36:15.925 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:36:15.925 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:36:15.925 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:36:15.925 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:36:15.925 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:15.925 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:15.926 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:15.926 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:36:15.926 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:36:15.926 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:36:15.926 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:36:15.926 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:36:15.926 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2364366 00:36:15.926 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' -z 2364366 ']' 00:36:15.926 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # kill -0 2364366 00:36:15.926 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # uname 00:36:15.926 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:15.926 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2364366 00:36:15.926 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:36:15.926 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:36:15.926 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2364366' killing process with pid 2364366 00:36:15.926 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # kill 2364366 00:36:15.926 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # wait 2364366
00:36:16.183 Received shutdown signal, test time was about 0.905644 seconds
00:36:16.183
00:36:16.183 Latency(us)
00:36:16.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:16.183 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:16.183 Verification LBA range: start 0x0 length 0x400
00:36:16.183 Nvme1n1 : 0.90 283.84 17.74 0.00 0.00 221922.61 19903.53 246997.90
00:36:16.183 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:16.183 Verification LBA range: start 0x0 length 0x400
00:36:16.183 Nvme2n1 : 0.88 217.67 13.60 0.00 0.00 284313.73 21845.33 256318.58
00:36:16.183 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:16.183 Verification LBA range: start 0x0 length 0x400
00:36:16.183 Nvme3n1 : 0.86 221.99 13.87 0.00 0.00 272136.98 20777.34 274959.93
00:36:16.183 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:16.183 Verification LBA range: start 0x0 length 0x400
00:36:16.183 Nvme4n1 : 0.86 223.25 13.95 0.00 0.00 264324.99 19806.44 254765.13
00:36:16.183 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:16.183 Verification LBA range: start 0x0 length 0x400
00:36:16.183 Nvme5n1 : 0.88 218.44 13.65 0.00 0.00 265068.34 19126.80 242337.56
00:36:16.183 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:16.183 Verification LBA range: start 0x0 length 0x400
00:36:16.183 Nvme6n1 : 0.87 236.51 14.78 0.00 0.00 234997.82 8543.95 219035.88
00:36:16.183 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:16.183 Verification LBA range: start 0x0 length 0x400
00:36:16.183 Nvme7n1 : 0.90 282.94 17.68 0.00 0.00 195955.29 19515.16 256318.58
00:36:16.183 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:16.183 Verification LBA range: start 0x0 length 0x400
00:36:16.183 Nvme8n1 : 0.89 215.52 13.47 0.00 0.00 250854.72 21456.97 273406.48
00:36:16.183 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:16.183 Verification LBA range: start 0x0 length 0x400
00:36:16.183 Nvme9n1 : 0.89 214.90 13.43 0.00 0.00 245792.87 19126.80 254765.13
00:36:16.183 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:16.183 Verification LBA range: start 0x0 length 0x400
00:36:16.183 Nvme10n1 : 0.90 213.76 13.36 0.00 0.00 241877.14 21554.06 284280.60
00:36:16.183 ===================================================================================================================
00:36:16.183 Total : 2328.84 145.55 0.00 0.00 245230.41 8543.95 284280.60
00:36:16.183 09:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:36:17.554 09:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2364303 00:36:17.554 09:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:36:17.554 09:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:36:17.554 09:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:36:17.554 09:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:17.554 09:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:36:17.554 09:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:17.554 09:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:36:17.554
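
The go/no-go decision traced above (read_io_count=67, sleep, then 131 passing the -ge 100 check) is a ten-attempt polling loop over the bdevperf RPC socket. A hedged standalone equivalent follows, with $SPDK_DIR assumed to point at an SPDK checkout; the harness issues the same bdev_get_iostat call through its rpc_cmd wrapper.

waitforio_sketch() {
    # Poll until the bdev has served at least 100 reads, 10 tries max,
    # pausing 0.25 s between attempts -- the same shape as waitforio above.
    local sock=$1 bdev=$2 i ops
    for ((i = 10; i > 0; i--)); do
        ops=$("$SPDK_DIR/scripts/rpc.py" -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        [ "$ops" -ge 100 ] && return 0   # enough I/O observed; shutdown may proceed
        sleep 0.25
    done
    return 1                             # threshold never reached
}

waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1
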
09:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:17.554 09:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:36:17.554 09:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:17.554 09:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:17.554 rmmod nvme_tcp 00:36:17.554 rmmod nvme_fabrics 00:36:17.554 rmmod nvme_keyring 00:36:17.554 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:17.554 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:36:17.554 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:36:17.554 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2364303 ']' 00:36:17.554 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2364303 00:36:17.554 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' -z 2364303 ']' 00:36:17.554 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # kill -0 2364303 00:36:17.554 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # uname 00:36:17.554 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:17.554 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2364303 00:36:17.554 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:36:17.554 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:36:17.554 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2364303' 00:36:17.554 killing process with pid 2364303 00:36:17.554 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # kill 2364303 00:36:17.554 [2024-05-15 09:02:12.047180] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:17.554 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # wait 2364303 00:36:17.813 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:17.813 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:17.813 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:17.813 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:17.813 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:17.813 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:17.813 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:17.813 09:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:20.374 00:36:20.374 real 0m7.450s 00:36:20.374 user 0m22.205s 00:36:20.374 sys 0m1.445s 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:20.374 ************************************ 00:36:20.374 END TEST nvmf_shutdown_tc2 00:36:20.374 ************************************ 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1104 -- # xtrace_disable 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:20.374 ************************************ 00:36:20.374 START TEST nvmf_shutdown_tc3 00:36:20.374 ************************************ 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # nvmf_shutdown_tc3 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:36:20.374 Found 0000:09:00.0 (0x8086 - 0x159b) 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:36:20.374 Found 0000:09:00.1 (0x8086 - 0x159b) 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:36:20.374 Found net devices under 0000:09:00.0: cvl_0_0 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:09:00.1: cvl_0_1' 00:36:20.374 Found net devices under 0000:09:00.1: cvl_0_1 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:20.374 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:20.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:20.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms
00:36:20.375
00:36:20.375 --- 10.0.0.2 ping statistics ---
00:36:20.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:20.375 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms
00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:20.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:20.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms
00:36:20.375
00:36:20.375 --- 10.0.0.1 ping statistics ---
00:36:20.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:20.375 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms
00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@721 -- # xtrace_disable 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2365281 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2365281 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # '[' -z 2365281 ']' 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
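
The two pings above close out the nvmf_tcp_init step: the target-side port lives in its own network namespace with 10.0.0.2, the initiator port stays in the default namespace with 10.0.0.1, and one echo in each direction proves the data path before nvmf_tgt is started inside that namespace. The commands are lifted directly from the trace; the cvl_0_0/cvl_0_1 interface names are specific to this E810 host.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
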
00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:20.375 09:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:20.375 [2024-05-15 09:02:14.872139] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:36:20.375 [2024-05-15 09:02:14.872234] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:20.375 EAL: No free 2048 kB hugepages reported on node 1 00:36:20.375 [2024-05-15 09:02:14.945790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:20.375 [2024-05-15 09:02:15.026721] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:20.375 [2024-05-15 09:02:15.026788] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:20.375 [2024-05-15 09:02:15.026801] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:20.375 [2024-05-15 09:02:15.026811] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:20.375 [2024-05-15 09:02:15.026820] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:20.375 [2024-05-15 09:02:15.026903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:20.375 [2024-05-15 09:02:15.026965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:20.375 [2024-05-15 09:02:15.027033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:36:20.375 [2024-05-15 09:02:15.027035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:20.375 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:20.375 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@861 -- # return 0 00:36:20.375 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:20.375 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:20.375 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:20.375 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:20.375 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:20.375 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.375 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:20.633 [2024-05-15 09:02:15.167772] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@721 -- # xtrace_disable 00:36:20.633 
09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.633 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:20.633 Malloc1 00:36:20.633 [2024-05-15 09:02:15.242461] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:20.633 [2024-05-15 09:02:15.242792] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:20.633 Malloc2 00:36:20.633 Malloc3 00:36:20.633 Malloc4 00:36:20.633 Malloc5 00:36:20.891 Malloc6 00:36:20.891 Malloc7 00:36:20.891 Malloc8 00:36:20.891 Malloc9 00:36:20.891 Malloc10 00:36:20.891 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.891 09:02:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:36:20.891 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:20.891 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:21.149 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2365461 00:36:21.149 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2365461 /var/tmp/bdevperf.sock 00:36:21.149 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # '[' -z 2365461 ']' 00:36:21.149 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:21.149 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:36:21.149 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:36:21.149 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:21.149 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:21.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:21.149 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:36:21.149 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:21.149 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:36:21.149 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:21.149 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:21.149 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:21.149 { 00:36:21.149 "params": { 00:36:21.149 "name": "Nvme$subsystem", 00:36:21.149 "trtype": "$TEST_TRANSPORT", 00:36:21.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:21.149 "adrfam": "ipv4", 00:36:21.149 "trsvcid": "$NVMF_PORT", 00:36:21.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:21.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:21.149 "hdgst": ${hdgst:-false}, 00:36:21.149 "ddgst": ${ddgst:-false} 00:36:21.149 }, 00:36:21.149 "method": "bdev_nvme_attach_controller" 00:36:21.149 } 00:36:21.149 EOF 00:36:21.149 )") 00:36:21.149 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:36:21.149 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:21.149 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:21.149 { 00:36:21.149 "params": { 00:36:21.149 "name": "Nvme$subsystem", 00:36:21.149 "trtype": "$TEST_TRANSPORT", 00:36:21.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:21.149 "adrfam": "ipv4", 00:36:21.149 "trsvcid": "$NVMF_PORT", 00:36:21.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:21.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:36:21.149 "hdgst": ${hdgst:-false}, 00:36:21.149 "ddgst": ${ddgst:-false} 00:36:21.149 }, 00:36:21.149 "method": "bdev_nvme_attach_controller" 00:36:21.149 } 00:36:21.149 EOF 00:36:21.149 )") 00:36:21.149 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:36:21.149 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:21.149 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:21.149 { 00:36:21.149 "params": { 00:36:21.149 "name": "Nvme$subsystem", 00:36:21.149 "trtype": "$TEST_TRANSPORT", 00:36:21.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:21.149 "adrfam": "ipv4", 00:36:21.149 "trsvcid": "$NVMF_PORT", 00:36:21.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:21.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:21.149 "hdgst": ${hdgst:-false}, 00:36:21.149 "ddgst": ${ddgst:-false} 00:36:21.150 }, 00:36:21.150 "method": "bdev_nvme_attach_controller" 00:36:21.150 } 00:36:21.150 EOF 00:36:21.150 )") 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:21.150 { 00:36:21.150 "params": { 00:36:21.150 "name": "Nvme$subsystem", 00:36:21.150 "trtype": "$TEST_TRANSPORT", 00:36:21.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:21.150 "adrfam": "ipv4", 00:36:21.150 "trsvcid": "$NVMF_PORT", 00:36:21.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:21.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:21.150 "hdgst": ${hdgst:-false}, 00:36:21.150 "ddgst": ${ddgst:-false} 00:36:21.150 }, 00:36:21.150 "method": "bdev_nvme_attach_controller" 00:36:21.150 } 00:36:21.150 EOF 00:36:21.150 )") 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:21.150 { 00:36:21.150 "params": { 00:36:21.150 "name": "Nvme$subsystem", 00:36:21.150 "trtype": "$TEST_TRANSPORT", 00:36:21.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:21.150 "adrfam": "ipv4", 00:36:21.150 "trsvcid": "$NVMF_PORT", 00:36:21.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:21.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:21.150 "hdgst": ${hdgst:-false}, 00:36:21.150 "ddgst": ${ddgst:-false} 00:36:21.150 }, 00:36:21.150 "method": "bdev_nvme_attach_controller" 00:36:21.150 } 00:36:21.150 EOF 00:36:21.150 )") 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:21.150 { 00:36:21.150 "params": { 00:36:21.150 "name": "Nvme$subsystem", 00:36:21.150 "trtype": "$TEST_TRANSPORT", 00:36:21.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:21.150 "adrfam": "ipv4", 00:36:21.150 "trsvcid": "$NVMF_PORT", 00:36:21.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:21.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:21.150 "hdgst": 
${hdgst:-false}, 00:36:21.150 "ddgst": ${ddgst:-false} 00:36:21.150 }, 00:36:21.150 "method": "bdev_nvme_attach_controller" 00:36:21.150 } 00:36:21.150 EOF 00:36:21.150 )") 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:21.150 { 00:36:21.150 "params": { 00:36:21.150 "name": "Nvme$subsystem", 00:36:21.150 "trtype": "$TEST_TRANSPORT", 00:36:21.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:21.150 "adrfam": "ipv4", 00:36:21.150 "trsvcid": "$NVMF_PORT", 00:36:21.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:21.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:21.150 "hdgst": ${hdgst:-false}, 00:36:21.150 "ddgst": ${ddgst:-false} 00:36:21.150 }, 00:36:21.150 "method": "bdev_nvme_attach_controller" 00:36:21.150 } 00:36:21.150 EOF 00:36:21.150 )") 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:21.150 { 00:36:21.150 "params": { 00:36:21.150 "name": "Nvme$subsystem", 00:36:21.150 "trtype": "$TEST_TRANSPORT", 00:36:21.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:21.150 "adrfam": "ipv4", 00:36:21.150 "trsvcid": "$NVMF_PORT", 00:36:21.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:21.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:21.150 "hdgst": ${hdgst:-false}, 00:36:21.150 "ddgst": ${ddgst:-false} 00:36:21.150 }, 00:36:21.150 "method": "bdev_nvme_attach_controller" 00:36:21.150 } 00:36:21.150 EOF 00:36:21.150 )") 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:21.150 { 00:36:21.150 "params": { 00:36:21.150 "name": "Nvme$subsystem", 00:36:21.150 "trtype": "$TEST_TRANSPORT", 00:36:21.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:21.150 "adrfam": "ipv4", 00:36:21.150 "trsvcid": "$NVMF_PORT", 00:36:21.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:21.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:21.150 "hdgst": ${hdgst:-false}, 00:36:21.150 "ddgst": ${ddgst:-false} 00:36:21.150 }, 00:36:21.150 "method": "bdev_nvme_attach_controller" 00:36:21.150 } 00:36:21.150 EOF 00:36:21.150 )") 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:21.150 { 00:36:21.150 "params": { 00:36:21.150 "name": "Nvme$subsystem", 00:36:21.150 "trtype": "$TEST_TRANSPORT", 00:36:21.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:21.150 "adrfam": "ipv4", 00:36:21.150 "trsvcid": "$NVMF_PORT", 00:36:21.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:21.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:21.150 "hdgst": ${hdgst:-false}, 00:36:21.150 
"ddgst": ${ddgst:-false} 00:36:21.150 }, 00:36:21.150 "method": "bdev_nvme_attach_controller" 00:36:21.150 } 00:36:21.150 EOF 00:36:21.150 )") 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:36:21.150 09:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:21.150 "params": { 00:36:21.150 "name": "Nvme1", 00:36:21.150 "trtype": "tcp", 00:36:21.150 "traddr": "10.0.0.2", 00:36:21.150 "adrfam": "ipv4", 00:36:21.150 "trsvcid": "4420", 00:36:21.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:21.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:21.150 "hdgst": false, 00:36:21.150 "ddgst": false 00:36:21.150 }, 00:36:21.150 "method": "bdev_nvme_attach_controller" 00:36:21.150 },{ 00:36:21.150 "params": { 00:36:21.150 "name": "Nvme2", 00:36:21.150 "trtype": "tcp", 00:36:21.150 "traddr": "10.0.0.2", 00:36:21.150 "adrfam": "ipv4", 00:36:21.150 "trsvcid": "4420", 00:36:21.150 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:21.150 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:21.150 "hdgst": false, 00:36:21.150 "ddgst": false 00:36:21.150 }, 00:36:21.150 "method": "bdev_nvme_attach_controller" 00:36:21.150 },{ 00:36:21.150 "params": { 00:36:21.150 "name": "Nvme3", 00:36:21.150 "trtype": "tcp", 00:36:21.150 "traddr": "10.0.0.2", 00:36:21.150 "adrfam": "ipv4", 00:36:21.150 "trsvcid": "4420", 00:36:21.150 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:36:21.150 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:36:21.150 "hdgst": false, 00:36:21.150 "ddgst": false 00:36:21.150 }, 00:36:21.150 "method": "bdev_nvme_attach_controller" 00:36:21.150 },{ 00:36:21.150 "params": { 00:36:21.150 "name": "Nvme4", 00:36:21.150 "trtype": "tcp", 00:36:21.150 "traddr": "10.0.0.2", 00:36:21.150 "adrfam": "ipv4", 00:36:21.150 "trsvcid": "4420", 00:36:21.150 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:36:21.150 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:36:21.150 "hdgst": false, 00:36:21.150 "ddgst": false 00:36:21.150 }, 00:36:21.150 "method": "bdev_nvme_attach_controller" 00:36:21.150 },{ 00:36:21.150 "params": { 00:36:21.150 "name": "Nvme5", 00:36:21.150 "trtype": "tcp", 00:36:21.150 "traddr": "10.0.0.2", 00:36:21.150 "adrfam": "ipv4", 00:36:21.150 "trsvcid": "4420", 00:36:21.150 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:36:21.150 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:36:21.150 "hdgst": false, 00:36:21.150 "ddgst": false 00:36:21.150 }, 00:36:21.150 "method": "bdev_nvme_attach_controller" 00:36:21.150 },{ 00:36:21.150 "params": { 00:36:21.150 "name": "Nvme6", 00:36:21.150 "trtype": "tcp", 00:36:21.150 "traddr": "10.0.0.2", 00:36:21.150 "adrfam": "ipv4", 00:36:21.150 "trsvcid": "4420", 00:36:21.150 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:36:21.150 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:36:21.150 "hdgst": false, 00:36:21.150 "ddgst": false 00:36:21.150 }, 00:36:21.150 "method": "bdev_nvme_attach_controller" 00:36:21.150 },{ 00:36:21.150 "params": { 00:36:21.150 "name": "Nvme7", 00:36:21.150 "trtype": "tcp", 00:36:21.150 "traddr": "10.0.0.2", 00:36:21.150 "adrfam": "ipv4", 00:36:21.150 "trsvcid": "4420", 00:36:21.150 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:36:21.151 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:36:21.151 "hdgst": false, 00:36:21.151 "ddgst": false 00:36:21.151 }, 00:36:21.151 "method": "bdev_nvme_attach_controller" 00:36:21.151 
},{ 00:36:21.151 "params": { 00:36:21.151 "name": "Nvme8", 00:36:21.151 "trtype": "tcp", 00:36:21.151 "traddr": "10.0.0.2", 00:36:21.151 "adrfam": "ipv4", 00:36:21.151 "trsvcid": "4420", 00:36:21.151 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:36:21.151 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:36:21.151 "hdgst": false, 00:36:21.151 "ddgst": false 00:36:21.151 }, 00:36:21.151 "method": "bdev_nvme_attach_controller" 00:36:21.151 },{ 00:36:21.151 "params": { 00:36:21.151 "name": "Nvme9", 00:36:21.151 "trtype": "tcp", 00:36:21.151 "traddr": "10.0.0.2", 00:36:21.151 "adrfam": "ipv4", 00:36:21.151 "trsvcid": "4420", 00:36:21.151 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:36:21.151 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:36:21.151 "hdgst": false, 00:36:21.151 "ddgst": false 00:36:21.151 }, 00:36:21.151 "method": "bdev_nvme_attach_controller" 00:36:21.151 },{ 00:36:21.151 "params": { 00:36:21.151 "name": "Nvme10", 00:36:21.151 "trtype": "tcp", 00:36:21.151 "traddr": "10.0.0.2", 00:36:21.151 "adrfam": "ipv4", 00:36:21.151 "trsvcid": "4420", 00:36:21.151 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:36:21.151 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:36:21.151 "hdgst": false, 00:36:21.151 "ddgst": false 00:36:21.151 }, 00:36:21.151 "method": "bdev_nvme_attach_controller" 00:36:21.151 }' 00:36:21.151 [2024-05-15 09:02:15.736900] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:36:21.151 [2024-05-15 09:02:15.736986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2365461 ] 00:36:21.151 EAL: No free 2048 kB hugepages reported on node 1 00:36:21.151 [2024-05-15 09:02:15.809475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:21.151 [2024-05-15 09:02:15.893174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:23.050 Running I/O for 10 seconds... 
00:36:23.050 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:23.050 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@861 -- # return 0 00:36:23.050 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:36:23.050 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.050 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:23.050 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.050 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:23.050 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:36:23.050 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:36:23.050 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:36:23.050 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:36:23.050 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:36:23.050 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:36:23.050 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:36:23.050 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:36:23.050 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:36:23.050 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.050 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:23.308 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.308 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:36:23.308 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:36:23.308 09:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:36:23.567 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:36:23.567 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:36:23.567 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:36:23.567 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:36:23.567 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.567 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:23.567 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.567 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:36:23.567 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:36:23.567 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:36:23.841 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:36:23.841 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:36:23.841 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:36:23.841 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:36:23.841 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.841 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:23.842 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.842 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:36:23.842 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:36:23.842 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:36:23.842 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:36:23.842 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:36:23.842 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2365281 00:36:23.842 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@947 -- # '[' -z 2365281 ']' 00:36:23.842 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # kill -0 2365281 00:36:23.842 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # uname 00:36:23.842 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:23.842 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2365281 00:36:23.842 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:36:23.842 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:36:23.842 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2365281' killing process with pid 2365281 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # kill 2365281 00:36:23.842 09:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # wait 2365281 00:36:23.842 [2024-05-15 09:02:18.495153] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:23.842 [2024-05-15 09:02:18.495874] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x951a60 is same with the state(5) to be set [last message repeated through 09:02:18.496782]
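The xtrace above is the waitforio gate from target/shutdown.sh: it repeatedly reads bdev_get_iostat for Nvme1n1 over the bdevperf RPC socket (read_io_count climbs 3 -> 67 -> 131 across the three probes) and releases once at least 100 reads have completed, so the target is guaranteed to be killed mid-I/O. A condensed paraphrase of that loop, with rpc_cmd standing in for scripts/rpc.py as it does in the harness (a sketch of the traced logic, not the verbatim script):

    # Sketch of the polling gate traced at target/shutdown.sh@57-69: up to ten
    # probes, 0.25 s apart, until the bdev has served at least 100 reads.
    waitforio() {
        local sock=$1 bdev=$2 ret=1 i
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0   # enough I/O observed; safe to start the shutdown
                break
            fi
            sleep 0.25
        done
        return $ret
    }

The ret=0 / break / return 0 sequence in the trace is exactly this loop succeeding on its third probe, after which killprocess 2365281 takes the nvmf target down.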
00:36:23.842 [2024-05-15 09:02:18.498034] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954440 is same with the state(5) to be set [last message repeated through 09:02:18.498909]
00:36:23.843 [2024-05-15 09:02:18.502137] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952840 is same with the state(5) to be set [last message repeated through 09:02:18.503020]
00:36:23.844 [2024-05-15 09:02:18.503855] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ce0 is same with the state(5) to be set [last message repeated through 09:02:18.504719]
00:36:23.845 [2024-05-15 09:02:18.505724] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9531a0 is same with the state(5) to be set [last message repeated through 09:02:18.506531]
recv state of tqpair=0x9531a0 is same with the state(5) to be set
00:36:23.845 [2024-05-15 09:02:18.506472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9531a0 is same with the state(5) to be set
00:36:23.845 [2024-05-15 09:02:18.506488] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9531a0 is same with the state(5) to be set
00:36:23.845 [2024-05-15 09:02:18.506501] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9531a0 is same with the state(5) to be set
00:36:23.845 [2024-05-15 09:02:18.506519] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9531a0 is same with the state(5) to be set
00:36:23.845 [2024-05-15 09:02:18.506531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9531a0 is same with the state(5) to be set
00:36:23.845 [2024-05-15 09:02:18.508005] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.845 [2024-05-15 09:02:18.508040] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.845 [2024-05-15 09:02:18.508057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.845 [2024-05-15 09:02:18.508070] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.845 [2024-05-15 09:02:18.508082] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.845 [2024-05-15 09:02:18.508095] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508119] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508144] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508182] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:36:23.846 [2024-05-15 09:02:18.508206] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508231] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508244] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508257] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.846 [2024-05-15 09:02:18.508269] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508282] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:36:23.846 [2024-05-15 09:02:18.508295] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.846 [2024-05-15 09:02:18.508316] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:36:23.846 [2024-05-15 09:02:18.508332] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.846 [2024-05-15 09:02:18.508347] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:36:23.846 [2024-05-15 09:02:18.508360] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.846 [2024-05-15 09:02:18.508373] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508378] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b29cb0 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508387] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508400] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508413] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508425] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508438] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:36:23.846 [2024-05-15 09:02:18.508454] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.846 [2024-05-15 09:02:18.508466] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508485] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508498] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:36:23.846 [2024-05-15 09:02:18.508519] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.846 [2024-05-15 09:02:18.508532] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:36:23.846 [2024-05-15 09:02:18.508549] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.846 [2024-05-15 09:02:18.508563] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:36:23.846 [2024-05-15 09:02:18.508576] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.846 [2024-05-15 09:02:18.508588] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508595] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f699f0 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508601] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508614] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508637] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508649] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508662] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:36:23.846 [2024-05-15 09:02:18.508674] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.846 [2024-05-15 09:02:18.508704] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:36:23.846 [2024-05-15 09:02:18.508717] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.846 [2024-05-15 09:02:18.508724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.847 [2024-05-15 09:02:18.508730] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.847 [2024-05-15 09:02:18.508739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:36:23.847 [2024-05-15 09:02:18.508742] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.847 [2024-05-15 09:02:18.508754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.847 [2024-05-15 09:02:18.508755] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.847 [2024-05-15 09:02:18.508773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.847 [2024-05-15 09:02:18.508773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:36:23.847 [2024-05-15 09:02:18.508787] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set
00:36:23.847 [2024-05-15 09:02:18.508789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.847
[2024-05-15 09:02:18.508800] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.508804] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffce70 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.508812] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.508825] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.508838] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.508850] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.508862] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.508864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.847 [2024-05-15 09:02:18.508874] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.508886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.847 [2024-05-15 09:02:18.508892] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.508902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.847 [2024-05-15 09:02:18.508916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.847 [2024-05-15 09:02:18.508914] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953640 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.508931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.847 [2024-05-15 09:02:18.508945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.847 [2024-05-15 09:02:18.508959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.847 [2024-05-15 09:02:18.508973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.847 [2024-05-15 09:02:18.508986] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20220a0 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.509032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.847 [2024-05-15 09:02:18.509053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.847 [2024-05-15 
09:02:18.509068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.847 [2024-05-15 09:02:18.509094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.847 [2024-05-15 09:02:18.509110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.847 [2024-05-15 09:02:18.509124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.847 [2024-05-15 09:02:18.509138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.847 [2024-05-15 09:02:18.509151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.847 [2024-05-15 09:02:18.509164] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.509231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.847 [2024-05-15 09:02:18.509252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.847 [2024-05-15 09:02:18.509268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.847 [2024-05-15 09:02:18.509282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.847 [2024-05-15 09:02:18.509296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.847 [2024-05-15 09:02:18.509309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.847 [2024-05-15 09:02:18.509323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.847 [2024-05-15 09:02:18.509337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.847 [2024-05-15 09:02:18.509350] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61560 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.509396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.847 [2024-05-15 09:02:18.509416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.847 [2024-05-15 09:02:18.509431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.847 [2024-05-15 09:02:18.509445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.847 [2024-05-15 09:02:18.509459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.847 [2024-05-15 09:02:18.509473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.847 [2024-05-15 09:02:18.509487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.847 [2024-05-15 09:02:18.509511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.847 [2024-05-15 09:02:18.509524] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201e0b0 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.509570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.847 [2024-05-15 09:02:18.509590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.847 [2024-05-15 09:02:18.509610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.847 [2024-05-15 09:02:18.509624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.847 [2024-05-15 09:02:18.509639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.847 [2024-05-15 09:02:18.509653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.847 [2024-05-15 09:02:18.509668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.847 [2024-05-15 09:02:18.509681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.847 [2024-05-15 09:02:18.509694] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f622c0 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.509978] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.510005] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.510019] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.510032] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.510045] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.510057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.510070] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.510083] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.510095] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.510107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.510119] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.510131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.510144] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.510157] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.510169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.510181] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.510193] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.847 [2024-05-15 09:02:18.510211] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.848 [2024-05-15 09:02:18.510233] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.848 [2024-05-15 09:02:18.510247] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.848 [2024-05-15 09:02:18.510264] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.848 [2024-05-15 09:02:18.510277] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.848 [2024-05-15 09:02:18.510289] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.848 [2024-05-15 09:02:18.510301] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.848 [2024-05-15 09:02:18.510315] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.848 [2024-05-15 09:02:18.510328] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.848 [2024-05-15 09:02:18.510341] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.848 [2024-05-15 09:02:18.510353] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.848 [2024-05-15 09:02:18.510366] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set 00:36:23.848 [2024-05-15 09:02:18.510378] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.848 [2024-05-15 09:02:18.510391] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.848 [2024-05-15 09:02:18.510404] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.848 [2024-05-15 09:02:18.510430] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510445] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.848 [2024-05-15 09:02:18.510459] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.848 [2024-05-15 09:02:18.510473] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.848 [2024-05-15 09:02:18.510485] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.848 [2024-05-15 09:02:18.510508] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.848 [2024-05-15 09:02:18.510525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.848 [2024-05-15 09:02:18.510539] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.848 [2024-05-15 09:02:18.510553] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510575] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.848 [2024-05-15 09:02:18.510589] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.848 [2024-05-15 09:02:18.510602] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.848 [2024-05-15 09:02:18.510615] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.848 [2024-05-15 09:02:18.510628] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.848 [2024-05-15 09:02:18.510641] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510655] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.848 [2024-05-15 09:02:18.510669] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.848 [2024-05-15 09:02:18.510682] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.848 [2024-05-15 09:02:18.510695] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.848 [2024-05-15 09:02:18.510708] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.848 [2024-05-15 09:02:18.510723] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.848 [2024-05-15 09:02:18.510736] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.848 [2024-05-15 09:02:18.510752] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510768] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.848 [2024-05-15 09:02:18.510780] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.848 [2024-05-15 09:02:18.510793] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.848 [2024-05-15 09:02:18.510806] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.848 [2024-05-15 09:02:18.510818] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510833] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.848 [2024-05-15 09:02:18.510845] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953ae0 is same with the state(5) to be set
00:36:23.848 [2024-05-15 09:02:18.510849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.848 [2024-05-15 09:02:18.510865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.848
[2024-05-15 09:02:18.510879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.848 [2024-05-15 09:02:18.510895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.848 [2024-05-15 09:02:18.510908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.848 [2024-05-15 09:02:18.510924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.848 [2024-05-15 09:02:18.510939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.848 [2024-05-15 09:02:18.510954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.848 [2024-05-15 09:02:18.510976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.848 [2024-05-15 09:02:18.511002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.511017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.511032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.511046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.511067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.511080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.511096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.511110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.511126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.511140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.511155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.511169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.511185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 
09:02:18.511208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.511233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.511249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.511264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.511279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.511295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.511309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.511325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.511339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.511355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.511369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.511390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.511405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.511421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.511435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.511451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.511466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.511481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.511496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.511520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 
09:02:18.511534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.849 [2024-05-15 09:02:18.511550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.849 [2024-05-15 09:02:18.511564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.849 [2024-05-15 09:02:18.511556] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953f80 is same with the state(5) to be set
00:36:23.849 [2024-05-15 09:02:18.511592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.849 [2024-05-15 09:02:18.511594] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953f80 is same with the state(5) to be set
00:36:23.849 [2024-05-15 09:02:18.511606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.849 [2024-05-15 09:02:18.511609] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953f80 is same with the state(5) to be set
00:36:23.849 [2024-05-15 09:02:18.511623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.849 [2024-05-15 09:02:18.511637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.849 [2024-05-15 09:02:18.511653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.849 [2024-05-15 09:02:18.511667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.849 [2024-05-15 09:02:18.511683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.849 [2024-05-15 09:02:18.511698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.849 [2024-05-15 09:02:18.511714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.849 [2024-05-15 09:02:18.511728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.849 [2024-05-15 09:02:18.511747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.849 [2024-05-15 09:02:18.511766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.849 [2024-05-15 09:02:18.511791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.849 [2024-05-15 09:02:18.511806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.849 [2024-05-15 09:02:18.511822] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.511837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.511853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.511879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.511896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.511910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.511927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.511941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.511957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.511972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.511989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.512004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.512020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.512034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.512051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.512065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.512081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.512095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.512111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.512126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.512142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.512160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.512177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.512191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.512210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.849 [2024-05-15 09:02:18.512234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.849 [2024-05-15 09:02:18.512252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.850 [2024-05-15 09:02:18.512267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.850 [2024-05-15 09:02:18.512283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.850 [2024-05-15 09:02:18.512298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.850 [2024-05-15 09:02:18.512315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.850 [2024-05-15 09:02:18.512329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.850 [2024-05-15 09:02:18.512346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.850 [2024-05-15 09:02:18.512360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.850 [2024-05-15 09:02:18.512376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.850 [2024-05-15 09:02:18.512390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.850 [2024-05-15 09:02:18.512408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.850 [2024-05-15 09:02:18.512423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.850 [2024-05-15 09:02:18.512440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.850 [2024-05-15 09:02:18.512454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.850 [2024-05-15 09:02:18.512472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.850 [2024-05-15 09:02:18.512486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.850 [2024-05-15 09:02:18.512544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:23.850 [2024-05-15 09:02:18.512631] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b2e0b0 was disconnected and freed. reset controller.
[... 128 records elided (00:36:23.850-23.851, 2024-05-15 09:02:18.512693-18.514765): nvme_io_qpair_print_command WRITE sqid:1 cid:0-63 nsid:1 lba:24576-32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:36:23.852 [2024-05-15 09:02:18.514780] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec140 is same with the state(5) to be set
00:36:23.852 [2024-05-15 09:02:18.514847] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fec140 was disconnected and freed. reset controller.
00:36:23.852 [2024-05-15 09:02:18.518202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.852 [2024-05-15 09:02:18.518244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:23.852 [2024-05-15 09:02:18.518280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.852 [2024-05-15 09:02:18.518297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 124 records elided (00:36:23.852-23.853, 2024-05-15 09:02:18.518313-18.520291): nvme_io_qpair_print_command READ sqid:1 cid:0-61 nsid:1 lba:16384-24192 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:36:23.853 [2024-05-15 09:02:18.520387] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20bc640 was disconnected and freed. reset controller.
00:36:23.853 [2024-05-15 09:02:18.520493] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:36:23.853 [2024-05-15 09:02:18.520543] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:36:23.853 [2024-05-15 09:02:18.520596] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f699f0 (9): Bad file descriptor
00:36:23.853 [2024-05-15 09:02:18.520643] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f622c0 (9): Bad file descriptor
00:36:23.853 [2024-05-15 09:02:18.520669] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b29cb0 (9): Bad file descriptor
[... 8 records elided (00:36:23.853, 2024-05-15 09:02:18.520735-18.520843): nvme_admin_qpair_print_command ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each followed by spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:36:23.853 [2024-05-15 09:02:18.520857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a58610 is same with the state(5) to be set
00:36:23.853 [2024-05-15 09:02:18.520886] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffce70 (9): Bad file descriptor
[... 8 records elided (00:36:23.853, 2024-05-15 09:02:18.520939-18.521061): nvme_admin_qpair_print_command ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each followed by spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:36:23.853 [2024-05-15 09:02:18.521074] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f5e90 is same with the state(5) to be set
00:36:23.853 [2024-05-15 09:02:18.521107] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20220a0 (9): Bad file descriptor
00:36:23.853 [2024-05-15 09:02:18.521136] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor
00:36:23.853 [2024-05-15 09:02:18.521166] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f61560 (9): Bad file descriptor
00:36:23.853 [2024-05-15 09:02:18.521196] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201e0b0 (9): Bad file descriptor
00:36:23.854 [2024-05-15 09:02:18.522884] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:36:23.854 [2024-05-15 09:02:18.522927] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f5e90 (9): Bad file descriptor
00:36:23.854 [2024-05-15 09:02:18.523649] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:36:23.854 [2024-05-15 09:02:18.523729] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:36:23.854 [2024-05-15 09:02:18.523971] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:36:23.854 [2024-05-15 09:02:18.524174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:23.854 [2024-05-15 09:02:18.524331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:23.854 [2024-05-15 09:02:18.524357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f622c0 with addr=10.0.0.2, port=4420
00:36:23.854 [2024-05-15 09:02:18.524374] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f622c0 is same with the state(5) to be set
00:36:23.854 [2024-05-15 09:02:18.524480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:23.854 [2024-05-15 09:02:18.524588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:23.854 [2024-05-15 09:02:18.524612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f699f0 with addr=10.0.0.2, port=4420
00:36:23.854 [2024-05-15 09:02:18.524628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f699f0 is same with the state(5) to be set
00:36:23.854 [2024-05-15 09:02:18.524713] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:36:23.854 [2024-05-15 09:02:18.524865] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:36:23.854 [2024-05-15 09:02:18.524933] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:36:23.854 [2024-05-15 09:02:18.525340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:23.854 [2024-05-15 09:02:18.525438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:23.854 [2024-05-15 09:02:18.525464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f5e90 with addr=10.0.0.2, port=4420
00:36:23.854 [2024-05-15 09:02:18.525481] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f5e90 is same with the state(5) to be set
00:36:23.854 [2024-05-15 09:02:18.525501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f622c0 (9): Bad file descriptor
00:36:23.854 [2024-05-15 09:02:18.525532] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f699f0 (9): Bad file descriptor
[... 88 records elided (00:36:23.854-23.855, 2024-05-15 09:02:18.525649-18.527102): nvme_io_qpair_print_command READ sqid:1 cid:9-52 nsid:1 lba:17536-23040 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; run continues ...]
00:36:23.855 [2024-05-15 09:02:18.527119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.855 [2024-05-15 09:02:18.527134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.855 [2024-05-15 09:02:18.527150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.855 [2024-05-15 09:02:18.527165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.855 [2024-05-15 09:02:18.527181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.855 [2024-05-15 09:02:18.527196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.855 [2024-05-15 09:02:18.527226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.855 [2024-05-15 09:02:18.527244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.855 [2024-05-15 09:02:18.527261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.855 [2024-05-15 09:02:18.527276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.855 [2024-05-15 09:02:18.527292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.855 [2024-05-15 09:02:18.527307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.855 [2024-05-15 09:02:18.527324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.855 [2024-05-15 09:02:18.527346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.855 [2024-05-15 09:02:18.527364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.855 [2024-05-15 09:02:18.527379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.855 [2024-05-15 09:02:18.527396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.855 [2024-05-15 09:02:18.527411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.855 [2024-05-15 09:02:18.527428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.855 [2024-05-15 09:02:18.527443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.855 [2024-05-15 09:02:18.527459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.855 [2024-05-15 09:02:18.527473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.855 [2024-05-15 09:02:18.527490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.855 [2024-05-15 09:02:18.527505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.855 [2024-05-15 09:02:18.527521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.855 [2024-05-15 09:02:18.527547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.855 [2024-05-15 09:02:18.527563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.855 [2024-05-15 09:02:18.527578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.855 [2024-05-15 09:02:18.527605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.855 [2024-05-15 09:02:18.527620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.855 [2024-05-15 09:02:18.527636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.855 [2024-05-15 09:02:18.527650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.855 [2024-05-15 09:02:18.527666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.855 [2024-05-15 09:02:18.527681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.855 [2024-05-15 09:02:18.527697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.855 [2024-05-15 09:02:18.527712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.855 [2024-05-15 09:02:18.527728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.855 [2024-05-15 09:02:18.527742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.855 [2024-05-15 09:02:18.527763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.856 [2024-05-15 09:02:18.527778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.856 [2024-05-15 09:02:18.527793] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feebf0 is same with the state(5) to be set 00:36:23.856 [2024-05-15 09:02:18.527874] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1feebf0 was disconnected and freed. reset controller. 00:36:23.856 [2024-05-15 09:02:18.527958] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f5e90 (9): Bad file descriptor 00:36:23.856 [2024-05-15 09:02:18.527986] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:36:23.856 [2024-05-15 09:02:18.528001] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:36:23.856 [2024-05-15 09:02:18.528017] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:36:23.856 [2024-05-15 09:02:18.528038] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:36:23.856 [2024-05-15 09:02:18.528053] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:36:23.856 [2024-05-15 09:02:18.528068] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:36:23.856 [2024-05-15 09:02:18.529320] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:23.856 [2024-05-15 09:02:18.529345] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:23.856 [2024-05-15 09:02:18.529360] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:36:23.856 [2024-05-15 09:02:18.529391] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:36:23.856 [2024-05-15 09:02:18.529409] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:36:23.856 [2024-05-15 09:02:18.529423] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:36:23.856 [2024-05-15 09:02:18.529505] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:23.856 [2024-05-15 09:02:18.529645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.856 [2024-05-15 09:02:18.529741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.856 [2024-05-15 09:02:18.529766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20220a0 with addr=10.0.0.2, port=4420 00:36:23.856 [2024-05-15 09:02:18.529783] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20220a0 is same with the state(5) to be set 00:36:23.856 [2024-05-15 09:02:18.530109] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20220a0 (9): Bad file descriptor 00:36:23.856 [2024-05-15 09:02:18.530177] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:36:23.856 [2024-05-15 09:02:18.530208] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:36:23.856 [2024-05-15 09:02:18.530233] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:36:23.856 [2024-05-15 09:02:18.530298] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:23.856 [2024-05-15 09:02:18.530538] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a58610 (9): Bad file descriptor
00:36:23.856 [2024-05-15 09:02:18.530694 - 09:02:18.532769] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 (lba step 128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [repeated command/completion pairs condensed]
00:36:23.857 [2024-05-15 09:02:18.532785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2ceb0 is same with the state(5) to be set
00:36:23.857 [2024-05-15 09:02:18.534047 - 09:02:18.536087] nvme_qpair.c: 243/474: *NOTICE*: WRITE sqid:1 cid:0-4 nsid:1 lba:24576-25088 len:128 interleaved with READ sqid:1 cid:5-63 nsid:1 lba:17024-24448 len:128 (lba step 128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [repeated command/completion pairs condensed]
00:36:23.859 [2024-05-15 09:02:18.536102] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed680 is same with the state(5) to be set
00:36:23.859 [2024-05-15 09:02:18.537383 - 09:02:18.537692] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 cid:0-9 nsid:1 lba:16384-17536 len:128 (lba step 128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [repeated command/completion pairs condensed]
00:36:23.859 [2024-05-15 09:02:18.537713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:23.859 [2024-05-15 09:02:18.537729] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.859 [2024-05-15 09:02:18.537746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.859 [2024-05-15 09:02:18.537761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.859 [2024-05-15 09:02:18.537777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.859 [2024-05-15 09:02:18.537792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.859 [2024-05-15 09:02:18.537809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.859 [2024-05-15 09:02:18.537823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.859 [2024-05-15 09:02:18.537840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.859 [2024-05-15 09:02:18.537854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.859 [2024-05-15 09:02:18.537870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.859 [2024-05-15 09:02:18.537885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.859 [2024-05-15 09:02:18.537901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.859 [2024-05-15 09:02:18.537916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.859 [2024-05-15 09:02:18.537938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.537952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.537968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.537982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.537999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.538979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.538993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.860 [2024-05-15 09:02:18.539010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.860 [2024-05-15 09:02:18.539024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:36:23.861 [2024-05-15 09:02:18.539041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.539055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.539071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.539086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.539102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.539117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.539133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.539147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.539164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.539178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.539195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.539234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.539253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.539267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.539284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.539298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.539315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.539330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.539350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.539367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 
09:02:18.539383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.539397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.539414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.539429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.539446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.539461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.539476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed860 is same with the state(5) to be set 00:36:23.861 [2024-05-15 09:02:18.540760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.540793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.540815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.540831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.540848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.540862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.540878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.540892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.540909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.540923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.540939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.540953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.540970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.540984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.541000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.541014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.541035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.541050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.541065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.541080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.541097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.541110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.541127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.541153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.541170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.541184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.541200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.541227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.541246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.541261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.541277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.541292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.541308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.541322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.541339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.541353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.541370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.541384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.541400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.541414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.541431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.541449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.541466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.541480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.541496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.541515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.541531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.541545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.541562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.541579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.541595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.861 [2024-05-15 09:02:18.541609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.861 [2024-05-15 09:02:18.541625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.541639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.541655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.541669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.541685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.541700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.541717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.541731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.541747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.541761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.541777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.541791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.541808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.541822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.541842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.541857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.541873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.541887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.541903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.541917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.541934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.541948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.541972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.541987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542294] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.542825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.542840] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff00d0 is same with the state(5) to be set 00:36:23.862 [2024-05-15 09:02:18.544124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.862 [2024-05-15 09:02:18.544149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.862 [2024-05-15 09:02:18.544171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544225] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.544980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.544996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.545010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.545026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.545041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.545057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.545072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.545088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.545102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.545119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.545133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.545149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.545166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.545183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.545197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.545234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.545252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.545269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.545283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.545299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.545313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.545330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.545344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.545360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.545374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.545390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.545405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.545421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.545435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.863 [2024-05-15 09:02:18.545451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.863 [2024-05-15 09:02:18.545465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.545481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.545495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.545511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
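Each READ/ABORTED pair above is SPDK tracing an in-flight command and the completion it received when its submission queue was torn down: the "(00/08)" suffix is the NVMe status pair (status code type / status code), and generic status code 0x08 is "Command Aborted due to SQ Deletion". A minimal, self-contained C sketch of that decoding follows; the helper name and the abbreviated lookup table are illustrative, not SPDK's actual code:

    /* decode_status.c - illustrative decoder for the "(SCT/SC)" pair printed
     * above; only the codes relevant to this log are listed. */
    #include <stdint.h>
    #include <stdio.h>

    static const char *nvme_generic_status_str(uint8_t sct, uint8_t sc)
    {
        if (sct != 0x0) {
            return "NON-GENERIC STATUS";        /* command-specific, media, etc. */
        }
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x07: return "ABORTED - BY REQUEST";
        case 0x08: return "ABORTED - SQ DELETION"; /* the (00/08) in this log */
        default:   return "OTHER GENERIC STATUS";
        }
    }

    int main(void)
    {
        printf("(00/08) -> %s\n", nvme_generic_status_str(0x0, 0x08));
        return 0;
    }

Every completion in this burst carries the same (00/08) pair because deleting one submission queue aborts every command still outstanding on it.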
00:36:23.864 [2024-05-15 09:02:18.545528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.545544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.545558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.545579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.545601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.545617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.545633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.545649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.545664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.545680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.545694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.545710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.545736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.545752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.545766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.545782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.545796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.545813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.545827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.545843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 
09:02:18.545857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.545873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.545888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.545904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.545920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.545936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.545950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.545966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.545984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.546000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.546015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.546031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.546045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.546062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.546076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.546092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.546107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.546123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.546137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.546153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.546172] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.546188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.546201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.864 [2024-05-15 09:02:18.546222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bdae0 is same with the state(5) to be set 00:36:23.864 [2024-05-15 09:02:18.547919] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:23.864 [2024-05-15 09:02:18.547951] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:36:23.864 [2024-05-15 09:02:18.547971] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:36:23.864 [2024-05-15 09:02:18.547989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:36:23.864 [2024-05-15 09:02:18.548125] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:23.864 [2024-05-15 09:02:18.548255] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:36:23.864 [2024-05-15 09:02:18.548517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.864 [2024-05-15 09:02:18.548665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.864 [2024-05-15 09:02:18.548692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b29cb0 with addr=10.0.0.2, port=4420 00:36:23.864 [2024-05-15 09:02:18.548709] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b29cb0 is same with the state(5) to be set 00:36:23.864 [2024-05-15 09:02:18.548809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.864 [2024-05-15 09:02:18.548918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.864 [2024-05-15 09:02:18.548943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f61560 with addr=10.0.0.2, port=4420 00:36:23.864 [2024-05-15 09:02:18.548958] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61560 is same with the state(5) to be set 00:36:23.864 [2024-05-15 09:02:18.549069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.864 [2024-05-15 09:02:18.549163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.864 [2024-05-15 09:02:18.549187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x201e0b0 with addr=10.0.0.2, port=4420 00:36:23.864 [2024-05-15 09:02:18.549210] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201e0b0 is same with the state(5) to be set 00:36:23.864 [2024-05-15 09:02:18.549317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.864 [2024-05-15 09:02:18.549427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.864 [2024-05-15 09:02:18.549451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:36:23.864 [2024-05-15 09:02:18.549468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:36:23.864 [2024-05-15 09:02:18.550597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.864 [2024-05-15 09:02:18.550632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.550667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.550690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.550707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.550721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.550738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.550753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.550769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.550783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.550800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.550821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.550838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.550852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.550869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.550883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.550905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.550920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.550938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.550953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.550970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.550984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
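Note the arithmetic in the aborted commands: every READ is len:128 blocks, and consecutive entries advance the LBA by exactly 128 (16384, 16512, 16640, ...), so the queue held one contiguous run of in-flight I/Os. Assuming the 512-byte block size these test bdevs conventionally use (an assumption; the block size is not stated in this log), 128 blocks is 65536 bytes, which matches the "IO size: 65536" the bdevperf job table reports further down. A quick self-contained check:

    /* io_size_check.c - sanity-check that len:128 blocks at an assumed
     * 512-byte block size equals bdevperf's 65536-byte I/O size. */
    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        const unsigned block_size = 512;   /* assumption: 512 B blocks */
        const unsigned len_blocks = 128;   /* "len:128" in the log */
        const unsigned io_bytes   = block_size * len_blocks;

        assert(io_bytes == 65536);         /* "IO size: 65536" in the job table */
        printf("lba step per command: %u blocks (%u bytes)\n", len_blocks, io_bytes);
        return 0;
    }

The same numbers tie out in the job table below: Nvme2n1's 211.17 IOPS at 65536 bytes per I/O is 211.17 / 16 ≈ 13.20 MiB/s, exactly the MiB/s column.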
00:36:23.865 [2024-05-15 09:02:18.551287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 
09:02:18.551613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.865 [2024-05-15 09:02:18.551934] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.865 [2024-05-15 09:02:18.551950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.551977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.551992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552592] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.866 [2024-05-15 09:02:18.552714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.866 [2024-05-15 09:02:18.552729] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff15b0 is same with the state(5) to be set 00:36:23.866 [2024-05-15 09:02:18.555319] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:36:23.866 [2024-05-15 09:02:18.555356] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:36:23.866 [2024-05-15 09:02:18.555374] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:36:23.866 [2024-05-15 09:02:18.555392] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:36:23.866 task offset: 24576 on job bdev=Nvme2n1 fails
00:36:23.866
00:36:23.866 Latency(us)
00:36:23.866 Device Information : runtime(s)    IOPS   MiB/s   Fail/s  TO/s   Average       min       max
00:36:23.866 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:23.866 Job: Nvme1n1 ended in about 0.93 seconds with error
00:36:23.866 Verification LBA range: start 0x0 length 0x400
00:36:23.866 Nvme1n1  : 0.93   138.19   8.64   69.09  0.00  305390.17  20680.25  273406.48
00:36:23.866 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:23.866 Job: Nvme2n1 ended in about 0.91 seconds with error
00:36:23.866 Verification LBA range: start 0x0 length 0x400
00:36:23.866 Nvme2n1  : 0.91   211.17  13.20   70.39  0.00  220167.21  18835.53  267192.70
00:36:23.866 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:23.866 Job: Nvme3n1 ended in about 0.91 seconds with error
00:36:23.866 Verification LBA range: start 0x0 length 0x400
00:36:23.866 Nvme3n1  : 0.91   210.93  13.18   70.31  0.00  215785.43  10922.67  274959.93
00:36:23.866 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:23.866 Job: Nvme4n1 ended in about 0.93 seconds with error
00:36:23.866 Verification LBA range: start 0x0 length 0x400
00:36:23.866 Nvme4n1  : 0.93   143.07   8.94   68.85  0.00  280873.94  12184.84  293601.28
00:36:23.866 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:23.866 Job: Nvme5n1 ended in about 0.93 seconds with error
00:36:23.866 Verification LBA range: start 0x0 length 0x400
00:36:23.866 Nvme5n1  : 0.93   137.20   8.57   68.60  0.00  283137.01  18835.53  267192.70
00:36:23.866 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:23.866 Job: Nvme6n1 ended in about 0.92 seconds with error
00:36:23.866 Verification LBA range: start 0x0 length 0x400
00:36:23.866 Nvme6n1  : 0.92   148.65   9.29   69.44  0.00  261106.20  18447.17  233016.89
00:36:23.866 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:23.866 Job: Nvme7n1 ended in about 0.94 seconds with error
00:36:23.866 Verification LBA range: start 0x0 length 0x400
00:36:23.866 Nvme7n1  : 0.94   136.71   8.54   68.35  0.00  272181.22  32622.36  267192.70
00:36:23.866 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:23.866 Job: Nvme8n1 ended in about 0.95 seconds with error
00:36:23.866 Verification LBA range: start 0x0 length 0x400
00:36:23.866 Nvme8n1  : 0.95   135.28   8.45   67.64  0.00  269615.91  17864.63  285834.05
00:36:23.866 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:23.866 Job: Nvme9n1 ended in about 0.92 seconds with error
00:36:23.866 Verification LBA range: start 0x0 length 0x400
00:36:23.866 Nvme9n1  : 0.92   139.86   8.74   69.93  0.00  253258.97   6407.96  313796.08
00:36:23.866 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:23.866 Job: Nvme10n1 ended in about 0.94 seconds with error
00:36:23.866 Verification LBA range: start 0x0 length 0x400
00:36:23.866 Nvme10n1 : 0.94   136.22   8.51   68.11  0.00  255538.06  20777.34  270299.59
00:36:23.866 ===================================================================================================================
00:36:23.866 Total    :       1537.28  96.08  690.72  0.00  259034.77   6407.96  313796.08
00:36:23.866 [2024-05-15 09:02:18.583064] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:23.866 [2024-05-15 09:02:18.583153] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:36:23.866 [2024-05-15 09:02:18.583491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.866 [2024-05-15 09:02:18.583645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.867 [2024-05-15 09:02:18.583673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffce70 with addr=10.0.0.2, port=4420 00:36:23.867 [2024-05-15 09:02:18.583702] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffce70 is same with the state(5) to be set 00:36:23.867 [2024-05-15 09:02:18.583731] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b29cb0 (9): Bad file descriptor 00:36:23.867 [2024-05-15 09:02:18.583767] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f61560 (9): Bad file descriptor 00:36:23.867 [2024-05-15 09:02:18.583786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201e0b0 (9): Bad file descriptor 00:36:23.867 [2024-05-15 09:02:18.583804] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:36:23.867 [2024-05-15 09:02:18.584164]
posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.867 [2024-05-15 09:02:18.584297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.867 [2024-05-15 09:02:18.584325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f699f0 with addr=10.0.0.2, port=4420 00:36:23.867 [2024-05-15 09:02:18.584342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f699f0 is same with the state(5) to be set 00:36:23.867 [2024-05-15 09:02:18.584445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.867 [2024-05-15 09:02:18.584542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.867 [2024-05-15 09:02:18.584568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f622c0 with addr=10.0.0.2, port=4420 00:36:23.867 [2024-05-15 09:02:18.584592] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f622c0 is same with the state(5) to be set 00:36:23.867 [2024-05-15 09:02:18.584686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.867 [2024-05-15 09:02:18.584796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.867 [2024-05-15 09:02:18.584822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f5e90 with addr=10.0.0.2, port=4420 00:36:23.867 [2024-05-15 09:02:18.584838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f5e90 is same with the state(5) to be set 00:36:23.867 [2024-05-15 09:02:18.584971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.867 [2024-05-15 09:02:18.585074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.867 [2024-05-15 09:02:18.585100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20220a0 with addr=10.0.0.2, port=4420 00:36:23.867 [2024-05-15 09:02:18.585116] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20220a0 is same with the state(5) to be set 00:36:23.867 [2024-05-15 09:02:18.585226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.867 [2024-05-15 09:02:18.585331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.867 [2024-05-15 09:02:18.585356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a58610 with addr=10.0.0.2, port=4420 00:36:23.867 [2024-05-15 09:02:18.585372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a58610 is same with the state(5) to be set 00:36:23.867 [2024-05-15 09:02:18.585391] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffce70 (9): Bad file descriptor 00:36:23.867 [2024-05-15 09:02:18.585410] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:23.867 [2024-05-15 09:02:18.585425] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:23.867 [2024-05-15 09:02:18.585442] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
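The repeated "connect() failed, errno = 111" lines are the host trying to re-dial the target at 10.0.0.2:4420 after it has gone away; on Linux, errno 111 is ECONNREFUSED. The same errno falls out of nothing more than a TCP connect() to a port with no listener — a minimal sketch, with the loopback address and port used as placeholders rather than the test's actual network:

    /* econnrefused.c - reproduce "connect() failed, errno = 111" by dialing
     * a TCP port with no listener; 111 is ECONNREFUSED on Linux. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            return 1;
        }
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port   = htons(4420),    /* NVMe/TCP's conventional port */
        };
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* assume no listener here */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }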
00:36:23.867 [2024-05-15 09:02:18.585463] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:36:23.867 [2024-05-15 09:02:18.585478] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:36:23.867 [2024-05-15 09:02:18.585492] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:36:23.867 [2024-05-15 09:02:18.585518] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:36:23.867 [2024-05-15 09:02:18.585533] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:36:23.867 [2024-05-15 09:02:18.585546] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:36:23.867 [2024-05-15 09:02:18.585564] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:36:23.867 [2024-05-15 09:02:18.585581] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:36:23.867 [2024-05-15 09:02:18.585594] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:36:23.867 [2024-05-15 09:02:18.585622] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:23.867 [2024-05-15 09:02:18.585653] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:23.867 [2024-05-15 09:02:18.585671] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:23.867 [2024-05-15 09:02:18.585692] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:23.867 [2024-05-15 09:02:18.585711] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:23.867 [2024-05-15 09:02:18.586094] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:23.867 [2024-05-15 09:02:18.586131] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:23.867 [2024-05-15 09:02:18.586145] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:23.867 [2024-05-15 09:02:18.586157] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
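The progression above — "Ctrlr is in error state", "controller reinitialization failed", then "in failed state" and "Resetting controller failed" — is the reset path giving up once the reconnect cannot complete. A toy model of that progression; the states and transition rule are a sketch for illustration, not SPDK's actual controller state machine:

    /* ctrlr_reset_model.c - simplified model of the disconnect -> reconnect
     * -> failed progression seen in the log above. */
    #include <stdbool.h>
    #include <stdio.h>

    enum ctrlr_state { CTRLR_CONNECTED, CTRLR_RESETTING, CTRLR_FAILED };

    static enum ctrlr_state reconnect_poll(enum ctrlr_state s, bool connect_ok)
    {
        if (s == CTRLR_RESETTING) {
            return connect_ok ? CTRLR_CONNECTED : CTRLR_FAILED;
        }
        return s;    /* no-op outside a reset */
    }

    int main(void)
    {
        enum ctrlr_state s = CTRLR_RESETTING;   /* "resetting controller" */
        s = reconnect_poll(s, false);           /* connect() refused above */
        if (s == CTRLR_FAILED) {
            printf("controller reinitialization failed; in failed state\n");
        }
        return 0;
    }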
00:36:23.867 [2024-05-15 09:02:18.586175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f699f0 (9): Bad file descriptor 00:36:23.867 [2024-05-15 09:02:18.586195] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f622c0 (9): Bad file descriptor 00:36:23.867 [2024-05-15 09:02:18.586234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f5e90 (9): Bad file descriptor 00:36:23.867 [2024-05-15 09:02:18.586255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20220a0 (9): Bad file descriptor 00:36:23.867 [2024-05-15 09:02:18.586272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a58610 (9): Bad file descriptor 00:36:23.867 [2024-05-15 09:02:18.586287] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:36:23.867 [2024-05-15 09:02:18.586302] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:36:23.867 [2024-05-15 09:02:18.586316] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:36:23.867 [2024-05-15 09:02:18.586373] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:23.867 [2024-05-15 09:02:18.586395] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:36:23.867 [2024-05-15 09:02:18.586408] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:36:23.867 [2024-05-15 09:02:18.586423] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:36:23.867 [2024-05-15 09:02:18.586440] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:36:23.867 [2024-05-15 09:02:18.586454] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:36:23.867 [2024-05-15 09:02:18.586467] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:36:23.867 [2024-05-15 09:02:18.586484] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:36:23.867 [2024-05-15 09:02:18.586498] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:36:23.867 [2024-05-15 09:02:18.586511] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:36:23.867 [2024-05-15 09:02:18.586530] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:36:23.867 [2024-05-15 09:02:18.586545] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:36:23.867 [2024-05-15 09:02:18.586559] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
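The "(9): Bad file descriptor" suffix in the flush failures above is likewise an errno: 9 is EBADF on Linux, meaning the qpair's socket had already been closed by the time the completion path tried to flush it. The same errno comes from any late write to a closed descriptor — demonstrated here with a pipe rather than a socket:

    /* ebadf.c - show errno 9 (EBADF) from writing to an already-closed fd,
     * the errno the "Failed to flush tqpair=... (9)" lines report. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) != 0) {
            return 1;
        }
        close(fds[1]);                      /* descriptor torn down early */
        if (write(fds[1], "x", 1) < 0) {    /* late flush attempt */
            printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fds[0]);
        return 0;
    }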
00:36:23.867 [2024-05-15 09:02:18.586574] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:36:23.867 [2024-05-15 09:02:18.586588] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:36:23.867 [2024-05-15 09:02:18.586602] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:36:23.867 [2024-05-15 09:02:18.586652] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:23.867 [2024-05-15 09:02:18.586672] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:23.867 [2024-05-15 09:02:18.586690] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:23.867 [2024-05-15 09:02:18.586703] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:23.867 [2024-05-15 09:02:18.586715] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:24.434 09:02:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:36:24.434 09:02:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2365461 00:36:25.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2365461) - No such process 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:25.373 rmmod nvme_tcp 00:36:25.373 rmmod nvme_fabrics 00:36:25.373 rmmod nvme_keyring 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:25.373 09:02:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:27.941 09:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:27.941 00:36:27.941 real 0m7.447s 00:36:27.941 user 0m18.348s 00:36:27.941 sys 0m1.421s 00:36:27.941 09:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:36:27.941 09:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:27.941 ************************************ 00:36:27.941 END TEST nvmf_shutdown_tc3 00:36:27.941 ************************************ 00:36:27.941 09:02:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:36:27.941 00:36:27.941 real 0m27.313s 00:36:27.941 user 1m14.302s 00:36:27.941 sys 0m6.629s 00:36:27.941 09:02:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # xtrace_disable 00:36:27.941 09:02:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:27.941 ************************************ 00:36:27.941 END TEST nvmf_shutdown 00:36:27.941 ************************************ 00:36:27.942 09:02:22 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:36:27.942 09:02:22 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:27.942 09:02:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:27.942 09:02:22 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:36:27.942 09:02:22 nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:36:27.942 09:02:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:27.942 09:02:22 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:36:27.942 09:02:22 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:36:27.942 09:02:22 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:36:27.942 09:02:22 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:36:27.942 09:02:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:27.942 ************************************ 00:36:27.942 START TEST nvmf_multicontroller 00:36:27.942 ************************************ 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:36:27.942 * Looking for test storage... 
00:36:27.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:36:27.942 09:02:22 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:36:27.942 09:02:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:29.860 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:29.860 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:29.861 09:02:24 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:36:29.861 Found 0000:09:00.0 (0x8086 - 0x159b) 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:36:29.861 Found 0000:09:00.1 (0x8086 - 0x159b) 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:36:29.861 Found net devices under 0000:09:00.0: cvl_0_0 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:36:29.861 Found net devices under 0000:09:00.1: cvl_0_1 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:29.861 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:30.120 09:02:24 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:30.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:30.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:36:30.120 00:36:30.120 --- 10.0.0.2 ping statistics --- 00:36:30.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:30.120 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:30.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:30.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:36:30.120 00:36:30.120 --- 10.0.0.1 ping statistics --- 00:36:30.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:30.120 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@721 -- # xtrace_disable 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2368270 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2368270 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # '[' -z 2368270 ']' 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@833 -- # local max_retries=100 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:30.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:30.120 09:02:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:30.120 [2024-05-15 09:02:24.840654] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:36:30.120 [2024-05-15 09:02:24.840725] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:30.120 EAL: No free 2048 kB hugepages reported on node 1 00:36:30.378 [2024-05-15 09:02:24.914283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:30.378 [2024-05-15 09:02:24.995283] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:30.378 [2024-05-15 09:02:24.995330] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:30.378 [2024-05-15 09:02:24.995351] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:30.378 [2024-05-15 09:02:24.995362] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:30.378 [2024-05-15 09:02:24.995372] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:30.378 [2024-05-15 09:02:24.995452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:30.378 [2024-05-15 09:02:24.995518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:30.378 [2024-05-15 09:02:24.995520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:30.378 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:30.378 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@861 -- # return 0 00:36:30.378 09:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:30.378 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:30.378 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:30.379 09:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:30.379 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:30.379 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.379 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:30.379 [2024-05-15 09:02:25.145797] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:30.379 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.379 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:30.379 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.379 09:02:25 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:30.637 Malloc0 00:36:30.637 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.637 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:30.637 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:30.638 [2024-05-15 09:02:25.215883] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:30.638 [2024-05-15 09:02:25.216248] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:30.638 [2024-05-15 09:02:25.224003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:30.638 Malloc1 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.638 09:02:25 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2368293 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2368293 /var/tmp/bdevperf.sock 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # '[' -z 2368293 ']' 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:30.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
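
bdevperf above was launched with -z, so it idles until driven over its JSON-RPC socket (-r /var/tmp/bdevperf.sock). What follows in the trace is that RPC-driven sequence; a condensed sketch of it is below, using the same SPDK tools, socket, and addresses as this run. The rpc_cmd calls in the trace wrap scripts/rpc.py against the given socket, and the relative paths here are an assumption standing in for the absolute /var/jenkins/... checkout paths the log uses:

# Attach the first controller path; on success bdevperf gains bdev NVMe0n1.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

# Re-attaching under the same controller name with a different hostnqn (-q),
# a different subsystem NQN (cnode2), or -x disable/failover fails with
# JSON-RPC error -114 ("A controller named NVMe0 already exists ..."); the
# NOT wrappers in the trace below assert exactly those failures.

# Attaching the same name and subsystem NQN on the second port does succeed
# in this trace (it adds a second path rather than a new controller):
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# With the controllers in place, start the queued workload bdevperf was armed
# with at launch (-q 128 -o 4096 -w write -t 1):
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
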
00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:30.638 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:30.896 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:30.896 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@861 -- # return 0 00:36:30.896 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:36:30.896 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.896 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:31.155 NVMe0n1 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.155 1 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:31.155 request: 00:36:31.155 { 00:36:31.155 "name": "NVMe0", 00:36:31.155 "trtype": "tcp", 00:36:31.155 "traddr": "10.0.0.2", 00:36:31.155 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:36:31.155 "hostaddr": "10.0.0.2", 00:36:31.155 "hostsvcid": "60000", 00:36:31.155 "adrfam": "ipv4", 00:36:31.155 "trsvcid": "4420", 00:36:31.155 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:31.155 "method": 
"bdev_nvme_attach_controller", 00:36:31.155 "req_id": 1 00:36:31.155 } 00:36:31.155 Got JSON-RPC error response 00:36:31.155 response: 00:36:31.155 { 00:36:31.155 "code": -114, 00:36:31.155 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:36:31.155 } 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:31.155 request: 00:36:31.155 { 00:36:31.155 "name": "NVMe0", 00:36:31.155 "trtype": "tcp", 00:36:31.155 "traddr": "10.0.0.2", 00:36:31.155 "hostaddr": "10.0.0.2", 00:36:31.155 "hostsvcid": "60000", 00:36:31.155 "adrfam": "ipv4", 00:36:31.155 "trsvcid": "4420", 00:36:31.155 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:31.155 "method": "bdev_nvme_attach_controller", 00:36:31.155 "req_id": 1 00:36:31.155 } 00:36:31.155 Got JSON-RPC error response 00:36:31.155 response: 00:36:31.155 { 00:36:31.155 "code": -114, 00:36:31.155 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:36:31.155 } 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.155 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:31.155 request: 00:36:31.155 { 00:36:31.155 "name": "NVMe0", 00:36:31.155 "trtype": "tcp", 00:36:31.155 "traddr": "10.0.0.2", 00:36:31.156 "hostaddr": "10.0.0.2", 00:36:31.156 "hostsvcid": "60000", 00:36:31.156 "adrfam": "ipv4", 00:36:31.156 "trsvcid": "4420", 00:36:31.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:31.156 "multipath": "disable", 00:36:31.156 "method": "bdev_nvme_attach_controller", 00:36:31.156 "req_id": 1 00:36:31.156 } 00:36:31.156 Got JSON-RPC error response 00:36:31.156 response: 00:36:31.156 { 00:36:31.156 "code": -114, 00:36:31.156 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:36:31.156 } 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@641 -- # type -t rpc_cmd 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:31.156 request: 00:36:31.156 { 00:36:31.156 "name": "NVMe0", 00:36:31.156 "trtype": "tcp", 00:36:31.156 "traddr": "10.0.0.2", 00:36:31.156 "hostaddr": "10.0.0.2", 00:36:31.156 "hostsvcid": "60000", 00:36:31.156 "adrfam": "ipv4", 00:36:31.156 "trsvcid": "4420", 00:36:31.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:31.156 "multipath": "failover", 00:36:31.156 "method": "bdev_nvme_attach_controller", 00:36:31.156 "req_id": 1 00:36:31.156 } 00:36:31.156 Got JSON-RPC error response 00:36:31.156 response: 00:36:31.156 { 00:36:31.156 "code": -114, 00:36:31.156 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:36:31.156 } 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:31.156 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.156 09:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:31.414 00:36:31.414 09:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.414 09:02:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:36:31.414 09:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.414 09:02:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:36:31.414 09:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:31.414 09:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.414 09:02:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:36:31.414 09:02:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:32.789 0 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2368293 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' -z 2368293 ']' 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # kill -0 2368293 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # uname 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2368293 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2368293' 00:36:32.789 killing process with pid 2368293 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # kill 2368293 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@971 -- # wait 2368293 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:36:32.789 09:02:27 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # read -r file 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # sort -u 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # cat 00:36:32.789 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:36:32.789 [2024-05-15 09:02:25.326839] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:36:32.789 [2024-05-15 09:02:25.326931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2368293 ] 00:36:32.789 EAL: No free 2048 kB hugepages reported on node 1 00:36:32.789 [2024-05-15 09:02:25.396887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:32.789 [2024-05-15 09:02:25.481998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:32.789 [2024-05-15 09:02:26.107264] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 847be439-5ebf-4cc0-816b-ca9b5da8898f already exists 00:36:32.789 [2024-05-15 09:02:26.107306] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:847be439-5ebf-4cc0-816b-ca9b5da8898f alias for bdev NVMe1n1 00:36:32.789 [2024-05-15 09:02:26.107324] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:36:32.789 Running I/O for 1 seconds... 
00:36:32.789 00:36:32.789 Latency(us) 00:36:32.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:32.789 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:36:32.789 NVMe0n1 : 1.01 17887.92 69.87 0.00 0.00 7143.15 6262.33 16893.72 00:36:32.789 =================================================================================================================== 00:36:32.789 Total : 17887.92 69.87 0.00 0.00 7143.15 6262.33 16893.72 00:36:32.789 Received shutdown signal, test time was about 1.000000 seconds 00:36:32.789 00:36:32.789 Latency(us) 00:36:32.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:32.789 =================================================================================================================== 00:36:32.789 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:32.789 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1615 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # read -r file 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:32.789 09:02:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:32.789 rmmod nvme_tcp 00:36:32.789 rmmod nvme_fabrics 00:36:33.048 rmmod nvme_keyring 00:36:33.048 09:02:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:33.048 09:02:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:36:33.048 09:02:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:36:33.048 09:02:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2368270 ']' 00:36:33.048 09:02:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2368270 00:36:33.048 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' -z 2368270 ']' 00:36:33.048 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # kill -0 2368270 00:36:33.048 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # uname 00:36:33.048 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:33.048 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2368270 00:36:33.048 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:36:33.048 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:36:33.048 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2368270' 00:36:33.048 killing process with pid 2368270 00:36:33.048 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # kill 2368270 00:36:33.048 [2024-05-15 
09:02:27.633940] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:33.048 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@971 -- # wait 2368270 00:36:33.306 09:02:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:33.306 09:02:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:33.306 09:02:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:33.306 09:02:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:33.306 09:02:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:33.306 09:02:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:33.306 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:33.306 09:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:35.239 09:02:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:35.239 00:36:35.239 real 0m7.737s 00:36:35.239 user 0m11.647s 00:36:35.239 sys 0m2.550s 00:36:35.239 09:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # xtrace_disable 00:36:35.239 09:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:35.239 ************************************ 00:36:35.239 END TEST nvmf_multicontroller 00:36:35.240 ************************************ 00:36:35.240 09:02:29 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:36:35.240 09:02:29 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:36:35.240 09:02:29 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:36:35.240 09:02:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:35.240 ************************************ 00:36:35.240 START TEST nvmf_aer 00:36:35.240 ************************************ 00:36:35.240 09:02:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:36:35.498 * Looking for test storage... 
00:36:35.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:35.498 09:02:30 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:36:35.499 09:02:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:36:38.030 Found 0000:09:00.0 (0x8086 - 0x159b) 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 
0x159b)' 00:36:38.030 Found 0000:09:00.1 (0x8086 - 0x159b) 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:36:38.030 Found net devices under 0000:09:00.0: cvl_0_0 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:36:38.030 Found net devices under 0000:09:00.1: cvl_0_1 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:38.030 
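The discovery pass above (gather_supported_nvmf_pci_devs) works purely from PCI vendor/device IDs and sysfs: it matches each E810 function (8086:159b), then expands /sys/bus/pci/devices/$pci/net/* to find the kernel netdev bound to it. A stand-alone sketch of the same lookup, assuming lspci is installed (the harness itself reads from a pre-built pci_bus_cache instead):

    # Map each Intel E810 (8086:159b) function to its netdev via sysfs,
    # mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) step above.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done

On this node that yields cvl_0_0 and cvl_0_1, the two ports picked as target and initiator interfaces below.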
09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:38.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:38.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:36:38.030 00:36:38.030 --- 10.0.0.2 ping statistics --- 00:36:38.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:38.030 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:38.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:38.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:36:38.030 00:36:38.030 --- 10.0.0.1 ping statistics --- 00:36:38.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:38.030 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@721 -- # xtrace_disable 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2370909 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2370909 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@828 -- # '[' -z 2370909 ']' 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:38.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:38.030 09:02:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:38.030 [2024-05-15 09:02:32.762305] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:36:38.031 [2024-05-15 09:02:32.762380] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:38.031 EAL: No free 2048 kB hugepages reported on node 1 00:36:38.288 [2024-05-15 09:02:32.837303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:38.288 [2024-05-15 09:02:32.925602] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:38.288 [2024-05-15 09:02:32.925664] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:38.288 [2024-05-15 09:02:32.925690] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:38.288 [2024-05-15 09:02:32.925704] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:38.288 [2024-05-15 09:02:32.925716] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:38.288 [2024-05-15 09:02:32.925803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:38.288 [2024-05-15 09:02:32.925872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:38.288 [2024-05-15 09:02:32.925978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:38.288 [2024-05-15 09:02:32.925980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:38.288 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:38.288 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@861 -- # return 0 00:36:38.288 09:02:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:38.288 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:38.288 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:38.288 09:02:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:38.288 09:02:33 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:38.288 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.288 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:38.545 [2024-05-15 09:02:33.082960] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:38.545 Malloc0 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:38.545 [2024-05-15 09:02:33.136392] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:38.545 [2024-05-15 09:02:33.136848] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:38.545 [ 00:36:38.545 { 00:36:38.545 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:38.545 "subtype": "Discovery", 00:36:38.545 "listen_addresses": [], 00:36:38.545 "allow_any_host": true, 00:36:38.545 "hosts": [] 00:36:38.545 }, 00:36:38.545 { 00:36:38.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:38.545 "subtype": "NVMe", 00:36:38.545 "listen_addresses": [ 00:36:38.545 { 00:36:38.545 "trtype": "TCP", 00:36:38.545 "adrfam": "IPv4", 00:36:38.545 "traddr": "10.0.0.2", 00:36:38.545 "trsvcid": "4420" 00:36:38.545 } 00:36:38.545 ], 00:36:38.545 "allow_any_host": true, 00:36:38.545 "hosts": [], 00:36:38.545 "serial_number": "SPDK00000000000001", 00:36:38.545 "model_number": "SPDK bdev Controller", 00:36:38.545 "max_namespaces": 2, 00:36:38.545 "min_cntlid": 1, 00:36:38.545 "max_cntlid": 65519, 00:36:38.545 "namespaces": [ 00:36:38.545 { 00:36:38.545 "nsid": 1, 00:36:38.545 "bdev_name": "Malloc0", 00:36:38.545 "name": "Malloc0", 00:36:38.545 "nguid": "D27CF0D5B85046FFBBB39F51252BCB36", 00:36:38.545 "uuid": "d27cf0d5-b850-46ff-bbb3-9f51252bcb36" 00:36:38.545 } 00:36:38.545 ] 00:36:38.545 } 00:36:38.545 ] 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2370940 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # local i=0 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 0 -lt 200 ']' 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=1 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:36:38.545 EAL: No free 2048 kB hugepages reported on node 1 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 1 -lt 200 ']' 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=2 00:36:38.545 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:36:38.802 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:36:38.802 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 2 -lt 200 ']' 00:36:38.802 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=3 00:36:38.802 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:36:38.802 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:36:38.802 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:36:38.802 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1273 -- # return 0 00:36:38.802 09:02:33 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:36:38.802 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.802 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:38.802 Malloc1 00:36:38.802 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.803 09:02:33 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:36:38.803 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.803 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:38.803 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.803 09:02:33 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:36:38.803 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.803 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:38.803 Asynchronous Event Request test 00:36:38.803 Attaching to 10.0.0.2 00:36:38.803 Attached to 10.0.0.2 00:36:38.803 Registering asynchronous event callbacks... 00:36:38.803 Starting namespace attribute notice tests for all controllers... 00:36:38.803 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:36:38.803 aer_cb - Changed Namespace 00:36:38.803 Cleaning up... 
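The "Changed Namespace" callback above is provoked deliberately: while the aer tool waits, the script hot-adds a second namespace to the live subsystem over RPC, and the target answers with a Namespace Attribute Changed AEN. Reduced to plain rpc.py calls (script path relative to an SPDK checkout; the RPC socket defaults to /var/tmp/spdk.sock), the trigger is:

    # Create a second malloc bdev and attach it as NSID 2 of cnode1; the
    # connected controller then receives the namespace-change AEN logged above.
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    scripts/rpc.py nvmf_get_subsystems   # now lists Malloc0 (nsid 1) and Malloc1 (nsid 2)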
00:36:38.803 [ 00:36:38.803 { 00:36:38.803 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:38.803 "subtype": "Discovery", 00:36:38.803 "listen_addresses": [], 00:36:38.803 "allow_any_host": true, 00:36:38.803 "hosts": [] 00:36:38.803 }, 00:36:38.803 { 00:36:38.803 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:38.803 "subtype": "NVMe", 00:36:38.803 "listen_addresses": [ 00:36:38.803 { 00:36:38.803 "trtype": "TCP", 00:36:38.803 "adrfam": "IPv4", 00:36:38.803 "traddr": "10.0.0.2", 00:36:38.803 "trsvcid": "4420" 00:36:38.803 } 00:36:38.803 ], 00:36:38.803 "allow_any_host": true, 00:36:38.803 "hosts": [], 00:36:38.803 "serial_number": "SPDK00000000000001", 00:36:38.803 "model_number": "SPDK bdev Controller", 00:36:38.803 "max_namespaces": 2, 00:36:38.803 "min_cntlid": 1, 00:36:38.803 "max_cntlid": 65519, 00:36:38.803 "namespaces": [ 00:36:38.803 { 00:36:38.803 "nsid": 1, 00:36:38.803 "bdev_name": "Malloc0", 00:36:38.803 "name": "Malloc0", 00:36:38.803 "nguid": "D27CF0D5B85046FFBBB39F51252BCB36", 00:36:38.803 "uuid": "d27cf0d5-b850-46ff-bbb3-9f51252bcb36" 00:36:38.803 }, 00:36:38.803 { 00:36:38.803 "nsid": 2, 00:36:38.803 "bdev_name": "Malloc1", 00:36:38.803 "name": "Malloc1", 00:36:38.803 "nguid": "AA375862EAA541CAB7237DD4D1A91B7D", 00:36:38.803 "uuid": "aa375862-eaa5-41ca-b723-7dd4d1a91b7d" 00:36:38.803 } 00:36:38.803 ] 00:36:38.803 } 00:36:38.803 ] 00:36:38.803 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.803 09:02:33 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2370940 00:36:38.803 09:02:33 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:36:38.803 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.803 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:38.803 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.803 09:02:33 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:36:38.803 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.803 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:39.060 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.060 09:02:33 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:39.060 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.060 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:39.060 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.060 09:02:33 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:36:39.060 09:02:33 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:36:39.060 09:02:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:39.060 09:02:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:36:39.060 09:02:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:39.060 09:02:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:36:39.061 09:02:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:39.061 09:02:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:39.061 rmmod nvme_tcp 00:36:39.061 rmmod nvme_fabrics 00:36:39.061 rmmod nvme_keyring 00:36:39.061 09:02:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:39.061 09:02:33 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:36:39.061 09:02:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:36:39.061 09:02:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2370909 ']' 00:36:39.061 09:02:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2370909 00:36:39.061 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@947 -- # '[' -z 2370909 ']' 00:36:39.061 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # kill -0 2370909 00:36:39.061 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # uname 00:36:39.061 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:39.061 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2370909 00:36:39.061 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:36:39.061 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:36:39.061 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2370909' 00:36:39.061 killing process with pid 2370909 00:36:39.061 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # kill 2370909 00:36:39.061 [2024-05-15 09:02:33.698384] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:39.061 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@971 -- # wait 2370909 00:36:39.317 09:02:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:39.317 09:02:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:39.317 09:02:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:39.317 09:02:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:39.317 09:02:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:39.317 09:02:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:39.317 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:39.317 09:02:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:41.216 09:02:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:41.216 00:36:41.216 real 0m5.947s 00:36:41.216 user 0m4.756s 00:36:41.216 sys 0m2.254s 00:36:41.216 09:02:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # xtrace_disable 00:36:41.216 09:02:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:41.216 ************************************ 00:36:41.216 END TEST nvmf_aer 00:36:41.216 ************************************ 00:36:41.216 09:02:35 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:36:41.216 09:02:35 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:36:41.216 09:02:35 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:36:41.216 09:02:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:41.474 ************************************ 00:36:41.474 START TEST nvmf_async_init 00:36:41.474 ************************************ 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 
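Before async_init's own trace starts, note the teardown pattern that just closed nvmf_aer (nvmftestfini); every test in this file ends the same way. A condensed sketch using the names from this run (the body of _remove_spdk_ns is not shown in the trace, so the netns deletion line is an assumption):

    sync
    modprobe -v -r nvme-tcp                # rmmod output above shows nvme_tcp,
    modprobe -v -r nvme-fabrics            # nvme_fabrics and nvme_keyring unloading
    kill "$nvmfpid"                        # nvmf_tgt started by nvmfappstart
    ip netns delete cvl_0_0_ns_spdk        # assumed implementation of _remove_spdk_ns
    ip -4 addr flush cvl_0_1               # drop the initiator-side address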
00:36:41.474 * Looking for test storage... 00:36:41.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:36:41.474 09:02:36 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=23ee0fc8c0d2478e820ac211e0ff4c0f 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:41.475 09:02:36 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:36:41.475 09:02:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:36:44.005 Found 0000:09:00.0 (0x8086 - 0x159b) 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:36:44.005 Found 0000:09:00.1 (0x8086 - 0x159b) 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:36:44.005 Found net devices under 0000:09:00.0: cvl_0_0 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
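The nvmf_tcp_init plumbing that follows repeats the topology built for the aer run: one port of the two-port E810 moves into a private network namespace to play the target, the other stays in the root namespace as the initiator, and a ping in each direction proves the path. Collapsed out of the trace into plain iproute2/iptables calls:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                              # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> root ns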
00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:36:44.005 Found net devices under 0000:09:00.1: cvl_0_1 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:44.005 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:44.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:44.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:36:44.006 00:36:44.006 --- 10.0.0.2 ping statistics --- 00:36:44.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:44.006 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:44.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:44.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:36:44.006 00:36:44.006 --- 10.0.0.1 ping statistics --- 00:36:44.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:44.006 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@721 -- # xtrace_disable 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2373283 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2373283 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@828 -- # '[' -z 2373283 ']' 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:44.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:44.006 09:02:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:44.006 [2024-05-15 09:02:38.751697] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
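The target binary runs inside that namespace, which is why nvmf/common.sh@270 above prepends the ip netns exec wrapper to NVMF_APP. The launch step reduces to the following (a sketch; waitforlisten's polling loop lives in common/autotest_common.sh and is not reproduced in this trace):

    # Start nvmf_tgt in the target namespace and wait for its RPC server.
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # blocks until /var/tmp/spdk.sock answers RPCs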
00:36:44.006 [2024-05-15 09:02:38.751779] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:44.006 EAL: No free 2048 kB hugepages reported on node 1 00:36:44.264 [2024-05-15 09:02:38.831949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:44.264 [2024-05-15 09:02:38.916856] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:44.264 [2024-05-15 09:02:38.916922] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:44.264 [2024-05-15 09:02:38.916948] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:44.264 [2024-05-15 09:02:38.916962] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:44.264 [2024-05-15 09:02:38.916974] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:44.264 [2024-05-15 09:02:38.917005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:44.264 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:44.264 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@861 -- # return 0 00:36:44.264 09:02:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:44.264 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:44.264 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:44.522 [2024-05-15 09:02:39.071022] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:44.522 null0 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 23ee0fc8c0d2478e820ac211e0ff4c0f 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:44.522 [2024-05-15 09:02:39.111058] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:44.522 [2024-05-15 09:02:39.111353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.522 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:44.780 nvme0n1 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:44.780 [ 00:36:44.780 { 00:36:44.780 "name": "nvme0n1", 00:36:44.780 "aliases": [ 00:36:44.780 "23ee0fc8-c0d2-478e-820a-c211e0ff4c0f" 00:36:44.780 ], 00:36:44.780 "product_name": "NVMe disk", 00:36:44.780 "block_size": 512, 00:36:44.780 "num_blocks": 2097152, 00:36:44.780 "uuid": "23ee0fc8-c0d2-478e-820a-c211e0ff4c0f", 00:36:44.780 "assigned_rate_limits": { 00:36:44.780 "rw_ios_per_sec": 0, 00:36:44.780 "rw_mbytes_per_sec": 0, 00:36:44.780 "r_mbytes_per_sec": 0, 00:36:44.780 "w_mbytes_per_sec": 0 00:36:44.780 }, 00:36:44.780 "claimed": false, 00:36:44.780 "zoned": false, 00:36:44.780 "supported_io_types": { 00:36:44.780 "read": true, 00:36:44.780 "write": true, 00:36:44.780 "unmap": false, 00:36:44.780 "write_zeroes": true, 00:36:44.780 "flush": true, 00:36:44.780 "reset": true, 00:36:44.780 "compare": true, 00:36:44.780 "compare_and_write": true, 00:36:44.780 "abort": true, 00:36:44.780 "nvme_admin": true, 00:36:44.780 "nvme_io": true 00:36:44.780 }, 00:36:44.780 "memory_domains": [ 00:36:44.780 { 00:36:44.780 "dma_device_id": "system", 00:36:44.780 "dma_device_type": 1 00:36:44.780 } 00:36:44.780 ], 00:36:44.780 "driver_specific": { 00:36:44.780 "nvme": [ 00:36:44.780 { 00:36:44.780 "trid": { 00:36:44.780 "trtype": "TCP", 00:36:44.780 "adrfam": "IPv4", 00:36:44.780 "traddr": "10.0.0.2", 00:36:44.780 "trsvcid": "4420", 00:36:44.780 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:36:44.780 }, 
00:36:44.780 "ctrlr_data": { 00:36:44.780 "cntlid": 1, 00:36:44.780 "vendor_id": "0x8086", 00:36:44.780 "model_number": "SPDK bdev Controller", 00:36:44.780 "serial_number": "00000000000000000000", 00:36:44.780 "firmware_revision": "24.05", 00:36:44.780 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:44.780 "oacs": { 00:36:44.780 "security": 0, 00:36:44.780 "format": 0, 00:36:44.780 "firmware": 0, 00:36:44.780 "ns_manage": 0 00:36:44.780 }, 00:36:44.780 "multi_ctrlr": true, 00:36:44.780 "ana_reporting": false 00:36:44.780 }, 00:36:44.780 "vs": { 00:36:44.780 "nvme_version": "1.3" 00:36:44.780 }, 00:36:44.780 "ns_data": { 00:36:44.780 "id": 1, 00:36:44.780 "can_share": true 00:36:44.780 } 00:36:44.780 } 00:36:44.780 ], 00:36:44.780 "mp_policy": "active_passive" 00:36:44.780 } 00:36:44.780 } 00:36:44.780 ] 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:44.780 [2024-05-15 09:02:39.363858] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:44.780 [2024-05-15 09:02:39.363947] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191da80 (9): Bad file descriptor 00:36:44.780 [2024-05-15 09:02:39.506387] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:44.780 [ 00:36:44.780 { 00:36:44.780 "name": "nvme0n1", 00:36:44.780 "aliases": [ 00:36:44.780 "23ee0fc8-c0d2-478e-820a-c211e0ff4c0f" 00:36:44.780 ], 00:36:44.780 "product_name": "NVMe disk", 00:36:44.780 "block_size": 512, 00:36:44.780 "num_blocks": 2097152, 00:36:44.780 "uuid": "23ee0fc8-c0d2-478e-820a-c211e0ff4c0f", 00:36:44.780 "assigned_rate_limits": { 00:36:44.780 "rw_ios_per_sec": 0, 00:36:44.780 "rw_mbytes_per_sec": 0, 00:36:44.780 "r_mbytes_per_sec": 0, 00:36:44.780 "w_mbytes_per_sec": 0 00:36:44.780 }, 00:36:44.780 "claimed": false, 00:36:44.780 "zoned": false, 00:36:44.780 "supported_io_types": { 00:36:44.780 "read": true, 00:36:44.780 "write": true, 00:36:44.780 "unmap": false, 00:36:44.780 "write_zeroes": true, 00:36:44.780 "flush": true, 00:36:44.780 "reset": true, 00:36:44.780 "compare": true, 00:36:44.780 "compare_and_write": true, 00:36:44.780 "abort": true, 00:36:44.780 "nvme_admin": true, 00:36:44.780 "nvme_io": true 00:36:44.780 }, 00:36:44.780 "memory_domains": [ 00:36:44.780 { 00:36:44.780 "dma_device_id": "system", 00:36:44.780 "dma_device_type": 1 00:36:44.780 } 00:36:44.780 ], 00:36:44.780 "driver_specific": { 00:36:44.780 "nvme": [ 00:36:44.780 { 00:36:44.780 "trid": { 00:36:44.780 "trtype": "TCP", 00:36:44.780 "adrfam": "IPv4", 00:36:44.780 "traddr": "10.0.0.2", 00:36:44.780 "trsvcid": "4420", 00:36:44.780 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:36:44.780 }, 00:36:44.780 "ctrlr_data": { 00:36:44.780 "cntlid": 2, 00:36:44.780 
"vendor_id": "0x8086", 00:36:44.780 "model_number": "SPDK bdev Controller", 00:36:44.780 "serial_number": "00000000000000000000", 00:36:44.780 "firmware_revision": "24.05", 00:36:44.780 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:44.780 "oacs": { 00:36:44.780 "security": 0, 00:36:44.780 "format": 0, 00:36:44.780 "firmware": 0, 00:36:44.780 "ns_manage": 0 00:36:44.780 }, 00:36:44.780 "multi_ctrlr": true, 00:36:44.780 "ana_reporting": false 00:36:44.780 }, 00:36:44.780 "vs": { 00:36:44.780 "nvme_version": "1.3" 00:36:44.780 }, 00:36:44.780 "ns_data": { 00:36:44.780 "id": 1, 00:36:44.780 "can_share": true 00:36:44.780 } 00:36:44.780 } 00:36:44.780 ], 00:36:44.780 "mp_policy": "active_passive" 00:36:44.780 } 00:36:44.780 } 00:36:44.780 ] 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.6HQh1i6yGy 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.6HQh1i6yGy 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.780 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:44.781 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.781 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:36:44.781 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.781 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:44.781 [2024-05-15 09:02:39.564529] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:44.781 [2024-05-15 09:02:39.564698] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:44.781 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.781 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6HQh1i6yGy 00:36:44.781 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.781 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:45.039 [2024-05-15 09:02:39.572558] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:45.039 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:45.039 09:02:39 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6HQh1i6yGy 00:36:45.039 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:45.039 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:45.039 [2024-05-15 09:02:39.580564] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:45.039 [2024-05-15 09:02:39.580636] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:36:45.039 nvme0n1 00:36:45.039 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:45.039 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:36:45.039 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:45.039 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:45.039 [ 00:36:45.039 { 00:36:45.039 "name": "nvme0n1", 00:36:45.039 "aliases": [ 00:36:45.039 "23ee0fc8-c0d2-478e-820a-c211e0ff4c0f" 00:36:45.039 ], 00:36:45.039 "product_name": "NVMe disk", 00:36:45.039 "block_size": 512, 00:36:45.039 "num_blocks": 2097152, 00:36:45.039 "uuid": "23ee0fc8-c0d2-478e-820a-c211e0ff4c0f", 00:36:45.039 "assigned_rate_limits": { 00:36:45.039 "rw_ios_per_sec": 0, 00:36:45.039 "rw_mbytes_per_sec": 0, 00:36:45.039 "r_mbytes_per_sec": 0, 00:36:45.039 "w_mbytes_per_sec": 0 00:36:45.039 }, 00:36:45.039 "claimed": false, 00:36:45.039 "zoned": false, 00:36:45.039 "supported_io_types": { 00:36:45.039 "read": true, 00:36:45.039 "write": true, 00:36:45.039 "unmap": false, 00:36:45.039 "write_zeroes": true, 00:36:45.039 "flush": true, 00:36:45.039 "reset": true, 00:36:45.039 "compare": true, 00:36:45.039 "compare_and_write": true, 00:36:45.039 "abort": true, 00:36:45.039 "nvme_admin": true, 00:36:45.039 "nvme_io": true 00:36:45.039 }, 00:36:45.039 "memory_domains": [ 00:36:45.039 { 00:36:45.039 "dma_device_id": "system", 00:36:45.039 "dma_device_type": 1 00:36:45.039 } 00:36:45.039 ], 00:36:45.039 "driver_specific": { 00:36:45.039 "nvme": [ 00:36:45.039 { 00:36:45.039 "trid": { 00:36:45.039 "trtype": "TCP", 00:36:45.039 "adrfam": "IPv4", 00:36:45.039 "traddr": "10.0.0.2", 00:36:45.039 "trsvcid": "4421", 00:36:45.039 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:36:45.039 }, 00:36:45.039 "ctrlr_data": { 00:36:45.039 "cntlid": 3, 00:36:45.039 "vendor_id": "0x8086", 00:36:45.039 "model_number": "SPDK bdev Controller", 00:36:45.039 "serial_number": "00000000000000000000", 00:36:45.039 "firmware_revision": "24.05", 00:36:45.039 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:45.039 "oacs": { 00:36:45.039 "security": 0, 00:36:45.039 "format": 0, 00:36:45.039 "firmware": 0, 00:36:45.039 "ns_manage": 0 00:36:45.040 }, 00:36:45.040 "multi_ctrlr": true, 00:36:45.040 "ana_reporting": false 00:36:45.040 }, 00:36:45.040 "vs": { 00:36:45.040 "nvme_version": "1.3" 00:36:45.040 }, 00:36:45.040 "ns_data": { 00:36:45.040 "id": 1, 00:36:45.040 "can_share": true 00:36:45.040 } 00:36:45.040 } 00:36:45.040 ], 00:36:45.040 "mp_policy": "active_passive" 00:36:45.040 } 00:36:45.040 } 00:36:45.040 ] 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.6HQh1i6yGy 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:45.040 rmmod nvme_tcp 00:36:45.040 rmmod nvme_fabrics 00:36:45.040 rmmod nvme_keyring 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2373283 ']' 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2373283 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@947 -- # '[' -z 2373283 ']' 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # kill -0 2373283 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # uname 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2373283 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2373283' 00:36:45.040 killing process with pid 2373283 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # kill 2373283 00:36:45.040 [2024-05-15 09:02:39.756877] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:36:45.040 [2024-05-15 09:02:39.756918] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:45.040 [2024-05-15 09:02:39.756936] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:45.040 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@971 -- # wait 2373283 00:36:45.298 09:02:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:45.298 09:02:39 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:45.298 09:02:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:45.298 09:02:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:45.298 09:02:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:45.298 09:02:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:45.298 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:45.298 09:02:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:47.825 09:02:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:47.825 00:36:47.825 real 0m5.981s 00:36:47.825 user 0m2.197s 00:36:47.825 sys 0m2.171s 00:36:47.825 09:02:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:36:47.825 09:02:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:47.825 ************************************ 00:36:47.825 END TEST nvmf_async_init 00:36:47.825 ************************************ 00:36:47.825 09:02:42 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:36:47.825 09:02:42 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:36:47.825 09:02:42 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:36:47.825 09:02:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:47.825 ************************************ 00:36:47.825 START TEST dma 00:36:47.825 ************************************ 00:36:47.825 09:02:42 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:36:47.825 * Looking for test storage... 
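Recap of the nvmf_async_init flow that just completed above: it stands up a 1 GiB null bdev behind subsystem cnode0, attaches back to it over the TCP listener with the initiator-side bdev_nvme driver, and checks that the namespace GUID set on the target (-g ...) comes back as the bdev UUID, before repeating the attach over the experimental TLS listener. A minimal by-hand sketch using scripts/rpc.py (the rpc_cmd calls in the trace are the autotest wrapper around it); addresses, sizes, and the GUID are copied from the log, and $key_path stands in for the 0600 temp file holding the NVMeTLSkey-1:01:... PSK created via mktemp in the trace:

scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py bdev_null_create null0 1024 512          # 1024 MiB null bdev, 512 B blocks
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
  -g 23ee0fc8c0d2478e820ac211e0ff4c0f                   # fixed NGUID -> stable namespace UUID
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Loop back through the initiator-side bdev_nvme driver and verify the UUID:
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
  -s 4420 -n nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_get_bdevs -b nvme0n1                # uuid field matches the NGUID above
scripts/rpc.py bdev_nvme_reset_controller nvme0         # reconnects; cntlid bumps 1 -> 2
# TLS leg (flagged experimental in this build): PSK-secured listener on 4421.
scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 \
  -s 4421 --secure-channel
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 \
  --psk "$key_path"
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
  -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"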
00:36:47.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:47.825 09:02:42 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:47.825 09:02:42 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:47.825 09:02:42 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:47.825 09:02:42 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:47.825 09:02:42 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.825 09:02:42 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.825 09:02:42 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.825 09:02:42 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:36:47.825 09:02:42 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:47.825 09:02:42 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:47.825 09:02:42 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:36:47.825 09:02:42 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:36:47.825 00:36:47.825 real 0m0.066s 00:36:47.825 user 0m0.032s 00:36:47.825 sys 0m0.038s 00:36:47.825 09:02:42 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # xtrace_disable 00:36:47.825 09:02:42 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:36:47.825 ************************************ 00:36:47.825 END TEST dma 00:36:47.825 ************************************ 00:36:47.825 09:02:42 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:36:47.825 09:02:42 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:36:47.825 09:02:42 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:36:47.825 09:02:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:47.825 ************************************ 00:36:47.825 START TEST nvmf_identify 00:36:47.825 ************************************ 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:36:47.825 * Looking for test storage... 
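A note on the dma test that just ended in under a second: host/dma.sh only has work to do on RDMA transports, so a tcp run takes the early-exit guard traced above ('[' tcp '!=' rdma ']' followed by exit 0) before any target is started. In unexpanded form the guard is roughly the following; the variable name is assumed, since the trace only shows the expanded test:

# host/dma.sh bails out immediately for non-RDMA transports:
if [ "$TEST_TRANSPORT" != "rdma" ]; then
  exit 0
fi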
00:36:47.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:47.825 09:02:42 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:36:47.826 09:02:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:36:49.722 Found 0000:09:00.0 (0x8086 - 0x159b) 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:36:49.722 Found 0000:09:00.1 (0x8086 - 0x159b) 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:36:49.722 Found net devices under 0000:09:00.0: cvl_0_0 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:36:49.722 Found net devices under 0000:09:00.1: cvl_0_1 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:49.722 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:49.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:49.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:36:49.980 00:36:49.980 --- 10.0.0.2 ping statistics --- 00:36:49.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:49.980 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:49.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:49.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:36:49.980 00:36:49.980 --- 10.0.0.1 ping statistics --- 00:36:49.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:49.980 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@721 -- # xtrace_disable 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2375697 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2375697 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@828 -- # '[' -z 2375697 ']' 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:49.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:49.980 09:02:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:49.980 [2024-05-15 09:02:44.678434] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:36:49.980 [2024-05-15 09:02:44.678506] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:49.980 EAL: No free 2048 kB hugepages reported on node 1 00:36:49.980 [2024-05-15 09:02:44.751734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:50.238 [2024-05-15 09:02:44.836166] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
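Before this nvmf_tgt came up, nvmftestinit split the two E810 ports across a network namespace so the initiator (cvl_0_1, 10.0.0.1) and the target (cvl_0_0, 10.0.0.2 inside cvl_0_0_ns_spdk) exchange traffic over a real link; the one-packet pings above are the sanity check. Condensed from the trace, the wiring is:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
# The target itself then runs inside the namespace (path relative to the repo root):
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF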
00:36:50.238 [2024-05-15 09:02:44.836241] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:50.238 [2024-05-15 09:02:44.836270] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:50.238 [2024-05-15 09:02:44.836282] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:50.238 [2024-05-15 09:02:44.836292] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:50.238 [2024-05-15 09:02:44.836357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:50.238 [2024-05-15 09:02:44.836426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:50.238 [2024-05-15 09:02:44.836478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:50.238 [2024-05-15 09:02:44.836475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:50.238 09:02:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:50.238 09:02:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@861 -- # return 0 00:36:50.238 09:02:44 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:50.238 09:02:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.238 09:02:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:50.238 [2024-05-15 09:02:44.961721] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:50.238 09:02:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.238 09:02:44 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:36:50.238 09:02:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:50.238 09:02:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:50.238 09:02:44 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:50.238 09:02:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.238 09:02:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:50.238 Malloc0 00:36:50.238 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.238 09:02:45 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:50.238 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.238 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:50.238 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.238 09:02:45 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:36:50.238 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.238 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:50.498 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.498 09:02:45 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:50.498 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 
-- # xtrace_disable 00:36:50.498 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:50.498 [2024-05-15 09:02:45.038542] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:50.498 [2024-05-15 09:02:45.038855] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:50.498 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.498 09:02:45 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:50.498 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.498 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:50.498 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.498 09:02:45 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:36:50.498 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.498 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:50.498 [ 00:36:50.498 { 00:36:50.498 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:50.498 "subtype": "Discovery", 00:36:50.498 "listen_addresses": [ 00:36:50.498 { 00:36:50.498 "trtype": "TCP", 00:36:50.498 "adrfam": "IPv4", 00:36:50.498 "traddr": "10.0.0.2", 00:36:50.498 "trsvcid": "4420" 00:36:50.498 } 00:36:50.498 ], 00:36:50.498 "allow_any_host": true, 00:36:50.498 "hosts": [] 00:36:50.498 }, 00:36:50.498 { 00:36:50.498 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:50.498 "subtype": "NVMe", 00:36:50.498 "listen_addresses": [ 00:36:50.498 { 00:36:50.498 "trtype": "TCP", 00:36:50.498 "adrfam": "IPv4", 00:36:50.498 "traddr": "10.0.0.2", 00:36:50.498 "trsvcid": "4420" 00:36:50.498 } 00:36:50.498 ], 00:36:50.498 "allow_any_host": true, 00:36:50.498 "hosts": [], 00:36:50.498 "serial_number": "SPDK00000000000001", 00:36:50.498 "model_number": "SPDK bdev Controller", 00:36:50.498 "max_namespaces": 32, 00:36:50.498 "min_cntlid": 1, 00:36:50.498 "max_cntlid": 65519, 00:36:50.498 "namespaces": [ 00:36:50.498 { 00:36:50.498 "nsid": 1, 00:36:50.498 "bdev_name": "Malloc0", 00:36:50.498 "name": "Malloc0", 00:36:50.498 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:36:50.498 "eui64": "ABCDEF0123456789", 00:36:50.498 "uuid": "e31513dd-5205-43f3-a0f9-f41d7528b85e" 00:36:50.498 } 00:36:50.498 ] 00:36:50.498 } 00:36:50.498 ] 00:36:50.498 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.498 09:02:45 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:36:50.498 [2024-05-15 09:02:45.079505] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
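The target-side state the identify test just built, and then dumped via nvmf_get_subsystems, boils down to one malloc-backed subsystem plus the discovery listener. A by-hand sketch with scripts/rpc.py standing in for the traced rpc_cmd wrapper, values taken from the log:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB RAM-backed bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
  --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_get_subsystems
# Host-side probe whose output follows; -L all turns on the debug log flags,
# which is where the nvme_ctrlr/nvme_tcp DEBUG lines below come from:
build/bin/spdk_nvme_identify -L all \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'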
00:36:50.498 [2024-05-15 09:02:45.079548] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2375841 ] 00:36:50.498 EAL: No free 2048 kB hugepages reported on node 1 00:36:50.498 [2024-05-15 09:02:45.113721] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:36:50.498 [2024-05-15 09:02:45.113777] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:36:50.498 [2024-05-15 09:02:45.113787] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:36:50.498 [2024-05-15 09:02:45.113802] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:36:50.498 [2024-05-15 09:02:45.113816] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:36:50.498 [2024-05-15 09:02:45.117287] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:36:50.498 [2024-05-15 09:02:45.117339] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1892120 0 00:36:50.498 [2024-05-15 09:02:45.125230] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:36:50.498 [2024-05-15 09:02:45.125260] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:36:50.499 [2024-05-15 09:02:45.125271] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:36:50.499 [2024-05-15 09:02:45.125278] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:36:50.499 [2024-05-15 09:02:45.125333] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.125346] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.125355] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1892120) 00:36:50.499 [2024-05-15 09:02:45.125376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:36:50.499 [2024-05-15 09:02:45.125403] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb1f0, cid 0, qid 0 00:36:50.499 [2024-05-15 09:02:45.133228] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.499 [2024-05-15 09:02:45.133246] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.499 [2024-05-15 09:02:45.133254] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.133261] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb1f0) on tqpair=0x1892120 00:36:50.499 [2024-05-15 09:02:45.133300] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:36:50.499 [2024-05-15 09:02:45.133313] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:36:50.499 [2024-05-15 09:02:45.133331] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:36:50.499 [2024-05-15 09:02:45.133354] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.133363] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.133370] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1892120) 00:36:50.499 [2024-05-15 09:02:45.133382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.499 [2024-05-15 09:02:45.133405] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb1f0, cid 0, qid 0 00:36:50.499 [2024-05-15 09:02:45.133559] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.499 [2024-05-15 09:02:45.133571] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.499 [2024-05-15 09:02:45.133578] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.133585] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb1f0) on tqpair=0x1892120 00:36:50.499 [2024-05-15 09:02:45.133597] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:36:50.499 [2024-05-15 09:02:45.133610] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:36:50.499 [2024-05-15 09:02:45.133623] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.133630] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.133637] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1892120) 00:36:50.499 [2024-05-15 09:02:45.133648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.499 [2024-05-15 09:02:45.133669] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb1f0, cid 0, qid 0 00:36:50.499 [2024-05-15 09:02:45.133813] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.499 [2024-05-15 09:02:45.133825] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.499 [2024-05-15 09:02:45.133832] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.133839] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb1f0) on tqpair=0x1892120 00:36:50.499 [2024-05-15 09:02:45.133850] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:36:50.499 [2024-05-15 09:02:45.133864] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:36:50.499 [2024-05-15 09:02:45.133876] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.133884] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.133891] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1892120) 00:36:50.499 [2024-05-15 09:02:45.133902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.499 [2024-05-15 09:02:45.133922] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb1f0, cid 0, qid 0 00:36:50.499 [2024-05-15 09:02:45.134022] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.499 [2024-05-15 
09:02:45.134037] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.499 [2024-05-15 09:02:45.134044] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.134051] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb1f0) on tqpair=0x1892120 00:36:50.499 [2024-05-15 09:02:45.134062] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:36:50.499 [2024-05-15 09:02:45.134083] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.134093] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.134100] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1892120) 00:36:50.499 [2024-05-15 09:02:45.134111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.499 [2024-05-15 09:02:45.134132] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb1f0, cid 0, qid 0 00:36:50.499 [2024-05-15 09:02:45.134282] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.499 [2024-05-15 09:02:45.134298] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.499 [2024-05-15 09:02:45.134305] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.134312] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb1f0) on tqpair=0x1892120 00:36:50.499 [2024-05-15 09:02:45.134322] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:36:50.499 [2024-05-15 09:02:45.134332] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:36:50.499 [2024-05-15 09:02:45.134345] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:36:50.499 [2024-05-15 09:02:45.134456] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:36:50.499 [2024-05-15 09:02:45.134464] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:36:50.499 [2024-05-15 09:02:45.134479] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.134487] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.134494] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1892120) 00:36:50.499 [2024-05-15 09:02:45.134505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.499 [2024-05-15 09:02:45.134542] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb1f0, cid 0, qid 0 00:36:50.499 [2024-05-15 09:02:45.134736] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.499 [2024-05-15 09:02:45.134749] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.499 [2024-05-15 09:02:45.134756] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.134763] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb1f0) on tqpair=0x1892120 00:36:50.499 [2024-05-15 09:02:45.134773] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:36:50.499 [2024-05-15 09:02:45.134789] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.134798] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.134805] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1892120) 00:36:50.499 [2024-05-15 09:02:45.134816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.499 [2024-05-15 09:02:45.134836] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb1f0, cid 0, qid 0 00:36:50.499 [2024-05-15 09:02:45.134941] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.499 [2024-05-15 09:02:45.134955] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.499 [2024-05-15 09:02:45.134962] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.134969] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb1f0) on tqpair=0x1892120 00:36:50.499 [2024-05-15 09:02:45.134979] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:36:50.499 [2024-05-15 09:02:45.134992] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:36:50.499 [2024-05-15 09:02:45.135007] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:36:50.499 [2024-05-15 09:02:45.135022] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:36:50.499 [2024-05-15 09:02:45.135038] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.135046] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1892120) 00:36:50.499 [2024-05-15 09:02:45.135057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.499 [2024-05-15 09:02:45.135078] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb1f0, cid 0, qid 0 00:36:50.499 [2024-05-15 09:02:45.135224] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:50.499 [2024-05-15 09:02:45.135240] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:50.499 [2024-05-15 09:02:45.135248] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.135255] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1892120): datao=0, datal=4096, cccid=0 00:36:50.499 [2024-05-15 09:02:45.135274] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18eb1f0) on tqpair(0x1892120): expected_datao=0, payload_size=4096 00:36:50.499 [2024-05-15 09:02:45.135282] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.499 [2024-05-15 09:02:45.135301] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.135313] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.135366] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.500 [2024-05-15 09:02:45.135377] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.500 [2024-05-15 09:02:45.135384] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.135391] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb1f0) on tqpair=0x1892120 00:36:50.500 [2024-05-15 09:02:45.135404] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:36:50.500 [2024-05-15 09:02:45.135414] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:36:50.500 [2024-05-15 09:02:45.135422] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:36:50.500 [2024-05-15 09:02:45.135431] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:36:50.500 [2024-05-15 09:02:45.135439] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:36:50.500 [2024-05-15 09:02:45.135447] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:36:50.500 [2024-05-15 09:02:45.135467] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:36:50.500 [2024-05-15 09:02:45.135484] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.135492] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.135499] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1892120) 00:36:50.500 [2024-05-15 09:02:45.135511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:50.500 [2024-05-15 09:02:45.135532] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb1f0, cid 0, qid 0 00:36:50.500 [2024-05-15 09:02:45.135649] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.500 [2024-05-15 09:02:45.135664] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.500 [2024-05-15 09:02:45.135671] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.135678] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb1f0) on tqpair=0x1892120 00:36:50.500 [2024-05-15 09:02:45.135698] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.135707] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.135714] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1892120) 00:36:50.500 [2024-05-15 09:02:45.135724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:36:50.500 [2024-05-15 09:02:45.135734] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.135742] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.135748] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1892120) 00:36:50.500 [2024-05-15 09:02:45.135757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:50.500 [2024-05-15 09:02:45.135767] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.135774] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.135781] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1892120) 00:36:50.500 [2024-05-15 09:02:45.135790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:50.500 [2024-05-15 09:02:45.135800] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.135807] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.135814] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1892120) 00:36:50.500 [2024-05-15 09:02:45.135823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:50.500 [2024-05-15 09:02:45.135833] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:36:50.500 [2024-05-15 09:02:45.135848] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:36:50.500 [2024-05-15 09:02:45.135859] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.135867] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1892120) 00:36:50.500 [2024-05-15 09:02:45.135877] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.500 [2024-05-15 09:02:45.135899] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb1f0, cid 0, qid 0 00:36:50.500 [2024-05-15 09:02:45.135910] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb350, cid 1, qid 0 00:36:50.500 [2024-05-15 09:02:45.135918] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb4b0, cid 2, qid 0 00:36:50.500 [2024-05-15 09:02:45.135926] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb610, cid 3, qid 0 00:36:50.500 [2024-05-15 09:02:45.135934] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb770, cid 4, qid 0 00:36:50.500 [2024-05-15 09:02:45.136125] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.500 [2024-05-15 09:02:45.136137] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.500 [2024-05-15 09:02:45.136144] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.136151] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb770) on tqpair=0x1892120 
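The records above show the host arming its four outstanding ASYNC EVENT REQUEST slots (cid 0..3) and then reading the Keep Alive Timer feature (cid 4). A minimal sketch of the same steps through SPDK's public API in spdk/nvme.h; the callbacks and the single completion poll are illustrative assumptions, not the code that produced this trace:

#include <stdio.h>
#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* Invoked when the target completes one of the outstanding
	 * ASYNC EVENT REQUEST commands (cid 0..3 in the trace). */
	printf("AER: cdw0=0x%x\n", cpl->cdw0);
}

static void
kato_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* For GET FEATURES KEEP ALIVE TIMER, cdw0 returns the timeout in
	 * ms; the trace then shows a keep alive sent every 5000000 us. */
	printf("KATO: %u ms\n", cpl->cdw0);
}

static void
arm_admin_queue(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

	/* GET FEATURES KEEP ALIVE TIMER (fid 0x0f; cid 4, cdw10:0000000f above). */
	spdk_nvme_ctrlr_cmd_get_feature(ctrlr, SPDK_NVME_FEAT_KEEP_ALIVE_TIMER,
					0, NULL, 0, kato_cb, NULL);

	/* Completions are only delivered while the admin queue is polled;
	 * a real application does this in its event loop. */
	spdk_nvme_ctrlr_process_admin_completions(ctrlr);
}
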
00:36:50.500 [2024-05-15 09:02:45.136171] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:36:50.500 [2024-05-15 09:02:45.136182] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:36:50.500 [2024-05-15 09:02:45.136200] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.136209] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1892120) 00:36:50.500 [2024-05-15 09:02:45.136228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.500 [2024-05-15 09:02:45.136250] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb770, cid 4, qid 0 00:36:50.500 [2024-05-15 09:02:45.136374] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:50.500 [2024-05-15 09:02:45.136390] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:50.500 [2024-05-15 09:02:45.136397] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.136403] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1892120): datao=0, datal=4096, cccid=4 00:36:50.500 [2024-05-15 09:02:45.136411] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18eb770) on tqpair(0x1892120): expected_datao=0, payload_size=4096 00:36:50.500 [2024-05-15 09:02:45.136419] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.136429] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.136437] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.136449] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.500 [2024-05-15 09:02:45.136459] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.500 [2024-05-15 09:02:45.136466] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.136473] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb770) on tqpair=0x1892120 00:36:50.500 [2024-05-15 09:02:45.136495] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:36:50.500 [2024-05-15 09:02:45.136536] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.136548] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1892120) 00:36:50.500 [2024-05-15 09:02:45.136559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.500 [2024-05-15 09:02:45.136571] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.136578] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.136585] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1892120) 00:36:50.500 [2024-05-15 09:02:45.136594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:36:50.500 [2024-05-15 09:02:45.136622] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb770, cid 4, qid 0 00:36:50.500 [2024-05-15 09:02:45.136634] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb8d0, cid 5, qid 0 00:36:50.500 [2024-05-15 09:02:45.136777] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:50.500 [2024-05-15 09:02:45.136789] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:50.500 [2024-05-15 09:02:45.136796] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.136802] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1892120): datao=0, datal=1024, cccid=4 00:36:50.500 [2024-05-15 09:02:45.136810] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18eb770) on tqpair(0x1892120): expected_datao=0, payload_size=1024 00:36:50.500 [2024-05-15 09:02:45.136818] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.136831] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.136839] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.136848] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.500 [2024-05-15 09:02:45.136858] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.500 [2024-05-15 09:02:45.136864] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.136871] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb8d0) on tqpair=0x1892120 00:36:50.500 [2024-05-15 09:02:45.180233] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.500 [2024-05-15 09:02:45.180253] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.500 [2024-05-15 09:02:45.180260] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.180268] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb770) on tqpair=0x1892120 00:36:50.500 [2024-05-15 09:02:45.180295] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.500 [2024-05-15 09:02:45.180306] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1892120) 00:36:50.500 [2024-05-15 09:02:45.180318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.501 [2024-05-15 09:02:45.180349] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb770, cid 4, qid 0 00:36:50.501 [2024-05-15 09:02:45.180474] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:50.501 [2024-05-15 09:02:45.180489] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:50.501 [2024-05-15 09:02:45.180496] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:50.501 [2024-05-15 09:02:45.180503] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1892120): datao=0, datal=3072, cccid=4 00:36:50.501 [2024-05-15 09:02:45.180511] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18eb770) on tqpair(0x1892120): expected_datao=0, payload_size=3072 00:36:50.501 [2024-05-15 09:02:45.180519] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.501 [2024-05-15 09:02:45.180538] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
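The GET LOG PAGE (02) commands in this stretch all carry log id 0x70, the discovery log page: a 1024-byte read of the header, a 3072-byte read covering the records, and a final 8-byte re-read of the generation counter to detect changes mid-read. A sketch of issuing the same read through the public API; the single fixed-size read and one-shot poll are simplifying assumptions:

#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static void
discovery_log_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct spdk_nvmf_discovery_log_page *log = arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		/* Matches "Generation Counter: 2 / Number of Records: 2"
		 * in the dump that follows. */
		printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n",
		       log->genctr, log->numrec);
	}
}

static int
fetch_discovery_log(struct spdk_nvme_ctrlr *ctrlr,
		    struct spdk_nvmf_discovery_log_page *buf, uint32_t len)
{
	/* Log identifier 0x70 (discovery); nsid 0 and offset 0, as in the
	 * first GET LOG PAGE of the trace. */
	int rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
						  0, buf, len, 0,
						  discovery_log_cb, buf);
	if (rc == 0) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	return rc;
}
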
00:36:50.501 [2024-05-15 09:02:45.180547] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:50.501 [2024-05-15 09:02:45.180598] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.501 [2024-05-15 09:02:45.180613] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.501 [2024-05-15 09:02:45.180620] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.501 [2024-05-15 09:02:45.180627] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb770) on tqpair=0x1892120 00:36:50.501 [2024-05-15 09:02:45.180644] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.501 [2024-05-15 09:02:45.180653] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1892120) 00:36:50.501 [2024-05-15 09:02:45.180664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.501 [2024-05-15 09:02:45.180692] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb770, cid 4, qid 0 00:36:50.501 [2024-05-15 09:02:45.180810] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:50.501 [2024-05-15 09:02:45.180822] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:50.501 [2024-05-15 09:02:45.180829] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:50.501 [2024-05-15 09:02:45.180835] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1892120): datao=0, datal=8, cccid=4 00:36:50.501 [2024-05-15 09:02:45.180843] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18eb770) on tqpair(0x1892120): expected_datao=0, payload_size=8 00:36:50.501 [2024-05-15 09:02:45.180851] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.501 [2024-05-15 09:02:45.180865] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:50.501 [2024-05-15 09:02:45.180874] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:50.501 [2024-05-15 09:02:45.221314] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.501 [2024-05-15 09:02:45.221334] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.501 [2024-05-15 09:02:45.221341] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.501 [2024-05-15 09:02:45.221349] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb770) on tqpair=0x1892120 00:36:50.501 ===================================================== 00:36:50.501 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:36:50.501 ===================================================== 00:36:50.501 Controller Capabilities/Features 00:36:50.501 ================================ 00:36:50.501 Vendor ID: 0000 00:36:50.501 Subsystem Vendor ID: 0000 00:36:50.501 Serial Number: .................... 00:36:50.501 Model Number: ........................................ 
00:36:50.501 Firmware Version: 24.05 00:36:50.501 Recommended Arb Burst: 0 00:36:50.501 IEEE OUI Identifier: 00 00 00 00:36:50.501 Multi-path I/O 00:36:50.501 May have multiple subsystem ports: No 00:36:50.501 May have multiple controllers: No 00:36:50.501 Associated with SR-IOV VF: No 00:36:50.501 Max Data Transfer Size: 131072 00:36:50.501 Max Number of Namespaces: 0 00:36:50.501 Max Number of I/O Queues: 1024 00:36:50.501 NVMe Specification Version (VS): 1.3 00:36:50.501 NVMe Specification Version (Identify): 1.3 00:36:50.501 Maximum Queue Entries: 128 00:36:50.501 Contiguous Queues Required: Yes 00:36:50.501 Arbitration Mechanisms Supported 00:36:50.501 Weighted Round Robin: Not Supported 00:36:50.501 Vendor Specific: Not Supported 00:36:50.501 Reset Timeout: 15000 ms 00:36:50.501 Doorbell Stride: 4 bytes 00:36:50.501 NVM Subsystem Reset: Not Supported 00:36:50.501 Command Sets Supported 00:36:50.501 NVM Command Set: Supported 00:36:50.501 Boot Partition: Not Supported 00:36:50.501 Memory Page Size Minimum: 4096 bytes 00:36:50.501 Memory Page Size Maximum: 4096 bytes 00:36:50.501 Persistent Memory Region: Not Supported 00:36:50.501 Optional Asynchronous Events Supported 00:36:50.501 Namespace Attribute Notices: Not Supported 00:36:50.501 Firmware Activation Notices: Not Supported 00:36:50.501 ANA Change Notices: Not Supported 00:36:50.501 PLE Aggregate Log Change Notices: Not Supported 00:36:50.501 LBA Status Info Alert Notices: Not Supported 00:36:50.501 EGE Aggregate Log Change Notices: Not Supported 00:36:50.501 Normal NVM Subsystem Shutdown event: Not Supported 00:36:50.501 Zone Descriptor Change Notices: Not Supported 00:36:50.501 Discovery Log Change Notices: Supported 00:36:50.501 Controller Attributes 00:36:50.501 128-bit Host Identifier: Not Supported 00:36:50.501 Non-Operational Permissive Mode: Not Supported 00:36:50.501 NVM Sets: Not Supported 00:36:50.501 Read Recovery Levels: Not Supported 00:36:50.501 Endurance Groups: Not Supported 00:36:50.501 Predictable Latency Mode: Not Supported 00:36:50.501 Traffic Based Keep Alive: Not Supported 00:36:50.501 Namespace Granularity: Not Supported 00:36:50.501 SQ Associations: Not Supported 00:36:50.501 UUID List: Not Supported 00:36:50.501 Multi-Domain Subsystem: Not Supported 00:36:50.501 Fixed Capacity Management: Not Supported 00:36:50.501 Variable Capacity Management: Not Supported 00:36:50.501 Delete Endurance Group: Not Supported 00:36:50.501 Delete NVM Set: Not Supported 00:36:50.501 Extended LBA Formats Supported: Not Supported 00:36:50.501 Flexible Data Placement Supported: Not Supported 00:36:50.501 00:36:50.501 Controller Memory Buffer Support 00:36:50.501 ================================ 00:36:50.501 Supported: No 00:36:50.501 00:36:50.501 Persistent Memory Region Support 00:36:50.501 ================================ 00:36:50.501 Supported: No 00:36:50.501 00:36:50.501 Admin Command Set Attributes 00:36:50.501 ============================ 00:36:50.501 Security Send/Receive: Not Supported 00:36:50.501 Format NVM: Not Supported 00:36:50.501 Firmware Activate/Download: Not Supported 00:36:50.501 Namespace Management: Not Supported 00:36:50.501 Device Self-Test: Not Supported 00:36:50.501 Directives: Not Supported 00:36:50.501 NVMe-MI: Not Supported 00:36:50.501 Virtualization Management: Not Supported 00:36:50.501 Doorbell Buffer Config: Not Supported 00:36:50.501 Get LBA Status Capability: Not Supported 00:36:50.501 Command & Feature Lockdown Capability: Not Supported 00:36:50.501 Abort Command Limit: 1 00:36:50.501 Async
Event Request Limit: 4 00:36:50.501 Number of Firmware Slots: N/A 00:36:50.501 Firmware Slot 1 Read-Only: N/A 00:36:50.501 Firmware Activation Without Reset: N/A 00:36:50.501 Multiple Update Detection Support: N/A 00:36:50.501 Firmware Update Granularity: No Information Provided 00:36:50.501 Per-Namespace SMART Log: No 00:36:50.501 Asymmetric Namespace Access Log Page: Not Supported 00:36:50.501 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:36:50.501 Command Effects Log Page: Not Supported 00:36:50.501 Get Log Page Extended Data: Supported 00:36:50.501 Telemetry Log Pages: Not Supported 00:36:50.501 Persistent Event Log Pages: Not Supported 00:36:50.501 Supported Log Pages Log Page: May Support 00:36:50.501 Commands Supported & Effects Log Page: Not Supported 00:36:50.501 Feature Identifiers & Effects Log Page: May Support 00:36:50.501 NVMe-MI Commands & Effects Log Page: May Support 00:36:50.501 Data Area 4 for Telemetry Log: Not Supported 00:36:50.501 Error Log Page Entries Supported: 128 00:36:50.501 Keep Alive: Not Supported 00:36:50.501 00:36:50.501 NVM Command Set Attributes 00:36:50.501 ========================== 00:36:50.501 Submission Queue Entry Size 00:36:50.501 Max: 1 00:36:50.501 Min: 1 00:36:50.501 Completion Queue Entry Size 00:36:50.501 Max: 1 00:36:50.501 Min: 1 00:36:50.501 Number of Namespaces: 0 00:36:50.501 Compare Command: Not Supported 00:36:50.501 Write Uncorrectable Command: Not Supported 00:36:50.501 Dataset Management Command: Not Supported 00:36:50.501 Write Zeroes Command: Not Supported 00:36:50.501 Set Features Save Field: Not Supported 00:36:50.501 Reservations: Not Supported 00:36:50.501 Timestamp: Not Supported 00:36:50.501 Copy: Not Supported 00:36:50.501 Volatile Write Cache: Not Present 00:36:50.501 Atomic Write Unit (Normal): 1 00:36:50.501 Atomic Write Unit (PFail): 1 00:36:50.501 Atomic Compare & Write Unit: 1 00:36:50.501 Fused Compare & Write: Supported 00:36:50.501 Scatter-Gather List 00:36:50.501 SGL Command Set: Supported 00:36:50.501 SGL Keyed: Supported 00:36:50.501 SGL Bit Bucket Descriptor: Not Supported 00:36:50.501 SGL Metadata Pointer: Not Supported 00:36:50.501 Oversized SGL: Not Supported 00:36:50.501 SGL Metadata Address: Not Supported 00:36:50.501 SGL Offset: Supported 00:36:50.501 Transport SGL Data Block: Not Supported 00:36:50.501 Replay Protected Memory Block: Not Supported 00:36:50.501 00:36:50.501 Firmware Slot Information 00:36:50.502 ========================= 00:36:50.502 Active slot: 0 00:36:50.502 00:36:50.502 00:36:50.502 Error Log 00:36:50.502 ========= 00:36:50.502 00:36:50.502 Active Namespaces 00:36:50.502 ================= 00:36:50.502 Discovery Log Page 00:36:50.502 ================== 00:36:50.502 Generation Counter: 2 00:36:50.502 Number of Records: 2 00:36:50.502 Record Format: 0 00:36:50.502 00:36:50.502 Discovery Log Entry 0 00:36:50.502 ---------------------- 00:36:50.502 Transport Type: 3 (TCP) 00:36:50.502 Address Family: 1 (IPv4) 00:36:50.502 Subsystem Type: 3 (Current Discovery Subsystem) 00:36:50.502 Entry Flags: 00:36:50.502 Duplicate Returned Information: 1 00:36:50.502 Explicit Persistent Connection Support for Discovery: 1 00:36:50.502 Transport Requirements: 00:36:50.502 Secure Channel: Not Required 00:36:50.502 Port ID: 0 (0x0000) 00:36:50.502 Controller ID: 65535 (0xffff) 00:36:50.502 Admin Max SQ Size: 128 00:36:50.502 Transport Service Identifier: 4420 00:36:50.502 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:36:50.502 Transport Address: 10.0.0.2 00:36:50.502
Discovery Log Entry 1 00:36:50.502 ---------------------- 00:36:50.502 Transport Type: 3 (TCP) 00:36:50.502 Address Family: 1 (IPv4) 00:36:50.502 Subsystem Type: 2 (NVM Subsystem) 00:36:50.502 Entry Flags: 00:36:50.502 Duplicate Returned Information: 0 00:36:50.502 Explicit Persistent Connection Support for Discovery: 0 00:36:50.502 Transport Requirements: 00:36:50.502 Secure Channel: Not Required 00:36:50.502 Port ID: 0 (0x0000) 00:36:50.502 Controller ID: 65535 (0xffff) 00:36:50.502 Admin Max SQ Size: 128 00:36:50.502 Transport Service Identifier: 4420 00:36:50.502 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:36:50.502 Transport Address: 10.0.0.2 [2024-05-15 09:02:45.221459] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:36:50.502 [2024-05-15 09:02:45.221486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:50.502 [2024-05-15 09:02:45.221500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:50.502 [2024-05-15 09:02:45.221510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:50.502 [2024-05-15 09:02:45.221520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:50.502 [2024-05-15 09:02:45.221535] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.221544] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.221551] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1892120) 00:36:50.502 [2024-05-15 09:02:45.221563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.502 [2024-05-15 09:02:45.221588] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb610, cid 3, qid 0 00:36:50.502 [2024-05-15 09:02:45.221781] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.502 [2024-05-15 09:02:45.221797] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.502 [2024-05-15 09:02:45.221804] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.221812] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb610) on tqpair=0x1892120 00:36:50.502 [2024-05-15 09:02:45.221826] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.221834] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.221841] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1892120) 00:36:50.502 [2024-05-15 09:02:45.221852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.502 [2024-05-15 09:02:45.221879] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb610, cid 3, qid 0 00:36:50.502 [2024-05-15 09:02:45.221991] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.502 [2024-05-15 09:02:45.222006] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.502 [2024-05-15 09:02:45.222013] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.222020] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb610) on tqpair=0x1892120 00:36:50.502 [2024-05-15 09:02:45.222031] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:36:50.502 [2024-05-15 09:02:45.222040] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:36:50.502 [2024-05-15 09:02:45.222057] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.222066] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.222073] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1892120) 00:36:50.502 [2024-05-15 09:02:45.222084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.502 [2024-05-15 09:02:45.222110] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb610, cid 3, qid 0 00:36:50.502 [2024-05-15 09:02:45.222211] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.502 [2024-05-15 09:02:45.222232] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.502 [2024-05-15 09:02:45.222240] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.222247] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb610) on tqpair=0x1892120 00:36:50.502 [2024-05-15 09:02:45.222266] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.222276] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.222283] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1892120) 00:36:50.502 [2024-05-15 09:02:45.222294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.502 [2024-05-15 09:02:45.222315] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb610, cid 3, qid 0 00:36:50.502 [2024-05-15 09:02:45.222416] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.502 [2024-05-15 09:02:45.222431] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.502 [2024-05-15 09:02:45.222438] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.222445] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb610) on tqpair=0x1892120 00:36:50.502 [2024-05-15 09:02:45.222463] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.222473] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.222480] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1892120) 00:36:50.502 [2024-05-15 09:02:45.222491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.502 [2024-05-15 09:02:45.222512] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb610, cid 3, qid 0 00:36:50.502 [2024-05-15 09:02:45.222603] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.502 [2024-05-15 
09:02:45.222618] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.502 [2024-05-15 09:02:45.222625] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.222632] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb610) on tqpair=0x1892120 00:36:50.502 [2024-05-15 09:02:45.222650] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.222660] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.222667] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1892120) 00:36:50.502 [2024-05-15 09:02:45.222678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.502 [2024-05-15 09:02:45.222698] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb610, cid 3, qid 0 00:36:50.502 [2024-05-15 09:02:45.222791] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.502 [2024-05-15 09:02:45.222806] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.502 [2024-05-15 09:02:45.222813] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.222820] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb610) on tqpair=0x1892120 00:36:50.502 [2024-05-15 09:02:45.222838] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.222848] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.222855] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1892120) 00:36:50.502 [2024-05-15 09:02:45.222866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.502 [2024-05-15 09:02:45.222887] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb610, cid 3, qid 0 00:36:50.502 [2024-05-15 09:02:45.222981] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.502 [2024-05-15 09:02:45.222996] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.502 [2024-05-15 09:02:45.223003] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.223010] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb610) on tqpair=0x1892120 00:36:50.502 [2024-05-15 09:02:45.223028] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.223038] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.502 [2024-05-15 09:02:45.223045] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1892120) 00:36:50.502 [2024-05-15 09:02:45.223056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.502 [2024-05-15 09:02:45.223076] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb610, cid 3, qid 0 00:36:50.502 [2024-05-15 09:02:45.223167] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.502 [2024-05-15 09:02:45.223182] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.502 [2024-05-15 09:02:45.223190] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
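The run of FABRIC PROPERTY GET records here is the host polling CSTS after writing CC.SHN for an orderly shutdown (the trace reports RTD3E = 0 us and a 10000 ms shutdown timeout above, and "shutdown complete in 5 milliseconds" just below). From an application this whole handshake is driven by a single detach call; a sketch, with the direct CSTS read included only for illustration:

#include "spdk/nvme.h"

static void
teardown(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_csts_register csts;

	/* Cached controller status; SHST reports shutdown progress. */
	csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);
	(void)csts.bits.shst;

	/* Writes CC.SHN, polls CSTS.SHST until shutdown completes, then
	 * frees the controller -- the handshake traced in this log. */
	spdk_nvme_detach(ctrlr);
}
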
00:36:50.503 [2024-05-15 09:02:45.223197] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb610) on tqpair=0x1892120 00:36:50.503 [2024-05-15 09:02:45.227222] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.503 [2024-05-15 09:02:45.227237] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.503 [2024-05-15 09:02:45.227244] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1892120) 00:36:50.503 [2024-05-15 09:02:45.227255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.503 [2024-05-15 09:02:45.227278] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eb610, cid 3, qid 0 00:36:50.503 [2024-05-15 09:02:45.227412] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.503 [2024-05-15 09:02:45.227425] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.503 [2024-05-15 09:02:45.227432] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.503 [2024-05-15 09:02:45.227439] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18eb610) on tqpair=0x1892120 00:36:50.503 [2024-05-15 09:02:45.227454] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:36:50.503 00:36:50.503 09:02:45 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:36:50.503 [2024-05-15 09:02:45.259546] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
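The -r argument of spdk_nvme_identify above is a transport ID string in the key:value form accepted by spdk_nvme_transport_id_parse(). A minimal host that connects to the same subsystem looks roughly as follows; error handling is trimmed and the controller options are left at their defaults:

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
	    "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Drives the admin-queue state machine traced in this log:
	 * FABRIC CONNECT, read VS/CAP, CC.EN = 1, wait for CSTS.RDY = 1,
	 * IDENTIFY controller, configure AER, set keep alive timeout. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("connected to %s, CNTLID 0x%04x\n", trid.subnqn, cdata->cntlid);

	spdk_nvme_detach(ctrlr);
	return 0;
}
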
00:36:50.503 [2024-05-15 09:02:45.259593] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2375845 ] 00:36:50.503 EAL: No free 2048 kB hugepages reported on node 1 00:36:50.764 [2024-05-15 09:02:45.293329] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:36:50.764 [2024-05-15 09:02:45.293378] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:36:50.764 [2024-05-15 09:02:45.293387] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:36:50.764 [2024-05-15 09:02:45.293400] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:36:50.764 [2024-05-15 09:02:45.293412] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:36:50.764 [2024-05-15 09:02:45.297269] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:36:50.764 [2024-05-15 09:02:45.297308] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1051120 0 00:36:50.764 [2024-05-15 09:02:45.305242] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:36:50.764 [2024-05-15 09:02:45.305266] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:36:50.764 [2024-05-15 09:02:45.305275] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:36:50.764 [2024-05-15 09:02:45.305282] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:36:50.764 [2024-05-15 09:02:45.305334] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.764 [2024-05-15 09:02:45.305346] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.764 [2024-05-15 09:02:45.305353] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1051120) 00:36:50.764 [2024-05-15 09:02:45.305368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:36:50.764 [2024-05-15 09:02:45.305394] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa1f0, cid 0, qid 0 00:36:50.764 [2024-05-15 09:02:45.313229] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.764 [2024-05-15 09:02:45.313247] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.764 [2024-05-15 09:02:45.313255] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.764 [2024-05-15 09:02:45.313262] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa1f0) on tqpair=0x1051120 00:36:50.764 [2024-05-15 09:02:45.313278] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:36:50.764 [2024-05-15 09:02:45.313288] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:36:50.764 [2024-05-15 09:02:45.313298] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:36:50.764 [2024-05-15 09:02:45.313316] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.764 [2024-05-15 09:02:45.313325] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.764 [2024-05-15 
09:02:45.313332] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1051120) 00:36:50.764 [2024-05-15 09:02:45.313343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.764 [2024-05-15 09:02:45.313367] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa1f0, cid 0, qid 0 00:36:50.764 [2024-05-15 09:02:45.313500] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.764 [2024-05-15 09:02:45.313516] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.764 [2024-05-15 09:02:45.313523] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.764 [2024-05-15 09:02:45.313530] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa1f0) on tqpair=0x1051120 00:36:50.764 [2024-05-15 09:02:45.313540] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:36:50.764 [2024-05-15 09:02:45.313554] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:36:50.764 [2024-05-15 09:02:45.313567] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.764 [2024-05-15 09:02:45.313575] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.764 [2024-05-15 09:02:45.313582] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1051120) 00:36:50.764 [2024-05-15 09:02:45.313593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.764 [2024-05-15 09:02:45.313615] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa1f0, cid 0, qid 0 00:36:50.764 [2024-05-15 09:02:45.313710] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.764 [2024-05-15 09:02:45.313726] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.764 [2024-05-15 09:02:45.313734] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.764 [2024-05-15 09:02:45.313741] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa1f0) on tqpair=0x1051120 00:36:50.764 [2024-05-15 09:02:45.313752] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:36:50.764 [2024-05-15 09:02:45.313766] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:36:50.764 [2024-05-15 09:02:45.313779] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.313786] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.313793] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1051120) 00:36:50.765 [2024-05-15 09:02:45.313804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.765 [2024-05-15 09:02:45.313825] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa1f0, cid 0, qid 0 00:36:50.765 [2024-05-15 09:02:45.313920] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.765 [2024-05-15 09:02:45.313935] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
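As in the discovery run, the "read vs" / "read cap" / "check en" states each map to a fabrics Property Get on the admin queue. Once the controller reaches the ready state, the cached register values are available through accessor functions; an illustrative use, consistent with the "Maximum Queue Entries: 128" and "Reset Timeout: 15000 ms" fields reported earlier:

#include <stdio.h>
#include "spdk/nvme.h"

static void
dump_ctrlr_regs(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

	/* MQES is zero-based, so a reported maximum of 128 entries means
	 * mqes = 127; CAP.TO counts 500 ms units, so 30 means 15000 ms. */
	printf("NVMe %u.%u, MQES %u, timeout %u ms\n",
	       vs.bits.mjr, vs.bits.mnr, cap.bits.mqes + 1u,
	       cap.bits.to * 500u);
}
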
00:36:50.765 [2024-05-15 09:02:45.313942] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.313949] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa1f0) on tqpair=0x1051120 00:36:50.765 [2024-05-15 09:02:45.313960] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:36:50.765 [2024-05-15 09:02:45.313977] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.313986] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.313993] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1051120) 00:36:50.765 [2024-05-15 09:02:45.314004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.765 [2024-05-15 09:02:45.314025] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa1f0, cid 0, qid 0 00:36:50.765 [2024-05-15 09:02:45.314119] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.765 [2024-05-15 09:02:45.314134] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.765 [2024-05-15 09:02:45.314141] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.314148] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa1f0) on tqpair=0x1051120 00:36:50.765 [2024-05-15 09:02:45.314158] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:36:50.765 [2024-05-15 09:02:45.314167] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:36:50.765 [2024-05-15 09:02:45.314180] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:36:50.765 [2024-05-15 09:02:45.314290] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:36:50.765 [2024-05-15 09:02:45.314300] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:36:50.765 [2024-05-15 09:02:45.314312] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.314320] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.314327] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1051120) 00:36:50.765 [2024-05-15 09:02:45.314338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.765 [2024-05-15 09:02:45.314364] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa1f0, cid 0, qid 0 00:36:50.765 [2024-05-15 09:02:45.314500] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.765 [2024-05-15 09:02:45.314516] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.765 [2024-05-15 09:02:45.314523] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.314530] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa1f0) on 
tqpair=0x1051120 00:36:50.765 [2024-05-15 09:02:45.314540] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:36:50.765 [2024-05-15 09:02:45.314557] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.314566] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.314573] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1051120) 00:36:50.765 [2024-05-15 09:02:45.314584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.765 [2024-05-15 09:02:45.314605] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa1f0, cid 0, qid 0 00:36:50.765 [2024-05-15 09:02:45.314702] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.765 [2024-05-15 09:02:45.314717] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.765 [2024-05-15 09:02:45.314724] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.314731] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa1f0) on tqpair=0x1051120 00:36:50.765 [2024-05-15 09:02:45.314741] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:36:50.765 [2024-05-15 09:02:45.314749] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:36:50.765 [2024-05-15 09:02:45.314763] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:36:50.765 [2024-05-15 09:02:45.314777] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:36:50.765 [2024-05-15 09:02:45.314791] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.314799] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1051120) 00:36:50.765 [2024-05-15 09:02:45.314810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.765 [2024-05-15 09:02:45.314832] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa1f0, cid 0, qid 0 00:36:50.765 [2024-05-15 09:02:45.314960] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:50.765 [2024-05-15 09:02:45.314972] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:50.765 [2024-05-15 09:02:45.314979] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.314986] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1051120): datao=0, datal=4096, cccid=0 00:36:50.765 [2024-05-15 09:02:45.314994] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10aa1f0) on tqpair(0x1051120): expected_datao=0, payload_size=4096 00:36:50.765 [2024-05-15 09:02:45.315002] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.315019] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.315028] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.315137] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.765 [2024-05-15 09:02:45.315148] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.765 [2024-05-15 09:02:45.315155] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.315166] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa1f0) on tqpair=0x1051120 00:36:50.765 [2024-05-15 09:02:45.315179] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:36:50.765 [2024-05-15 09:02:45.315188] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:36:50.765 [2024-05-15 09:02:45.315196] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:36:50.765 [2024-05-15 09:02:45.315203] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:36:50.765 [2024-05-15 09:02:45.315211] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:36:50.765 [2024-05-15 09:02:45.315228] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:36:50.765 [2024-05-15 09:02:45.315248] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:36:50.765 [2024-05-15 09:02:45.315264] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.315272] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.315279] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1051120) 00:36:50.765 [2024-05-15 09:02:45.315290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:50.765 [2024-05-15 09:02:45.315312] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa1f0, cid 0, qid 0 00:36:50.765 [2024-05-15 09:02:45.315442] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.765 [2024-05-15 09:02:45.315458] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.765 [2024-05-15 09:02:45.315465] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.315472] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa1f0) on tqpair=0x1051120 00:36:50.765 [2024-05-15 09:02:45.315489] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.315498] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.315505] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1051120) 00:36:50.765 [2024-05-15 09:02:45.315515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:50.765 [2024-05-15 09:02:45.315526] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.315533] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.315539] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1051120) 00:36:50.765 [2024-05-15 09:02:45.315548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:50.765 [2024-05-15 09:02:45.315558] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.315565] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.315572] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1051120) 00:36:50.765 [2024-05-15 09:02:45.315581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:50.765 [2024-05-15 09:02:45.315591] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.315598] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.765 [2024-05-15 09:02:45.315604] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1051120) 00:36:50.765 [2024-05-15 09:02:45.315613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:50.765 [2024-05-15 09:02:45.315622] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:36:50.766 [2024-05-15 09:02:45.315640] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:36:50.766 [2024-05-15 09:02:45.315653] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.315660] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1051120) 00:36:50.766 [2024-05-15 09:02:45.315671] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.766 [2024-05-15 09:02:45.315694] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa1f0, cid 0, qid 0 00:36:50.766 [2024-05-15 09:02:45.315705] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa350, cid 1, qid 0 00:36:50.766 [2024-05-15 09:02:45.315714] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa4b0, cid 2, qid 0 00:36:50.766 [2024-05-15 09:02:45.315722] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa610, cid 3, qid 0 00:36:50.766 [2024-05-15 09:02:45.315730] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa770, cid 4, qid 0 00:36:50.766 [2024-05-15 09:02:45.315891] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.766 [2024-05-15 09:02:45.315907] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.766 [2024-05-15 09:02:45.315914] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.315921] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa770) on tqpair=0x1051120 00:36:50.766 [2024-05-15 09:02:45.315935] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:36:50.766 [2024-05-15 09:02:45.315945] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:36:50.766 [2024-05-15 09:02:45.315960] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:36:50.766 [2024-05-15 09:02:45.315972] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:36:50.766 [2024-05-15 09:02:45.315984] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.315991] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.315998] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1051120) 00:36:50.766 [2024-05-15 09:02:45.316009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:50.766 [2024-05-15 09:02:45.316031] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa770, cid 4, qid 0 00:36:50.766 [2024-05-15 09:02:45.316163] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.766 [2024-05-15 09:02:45.316179] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.766 [2024-05-15 09:02:45.316186] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.316193] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa770) on tqpair=0x1051120 00:36:50.766 [2024-05-15 09:02:45.316259] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:36:50.766 [2024-05-15 09:02:45.316280] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:36:50.766 [2024-05-15 09:02:45.316295] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.316304] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1051120) 00:36:50.766 [2024-05-15 09:02:45.316315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.766 [2024-05-15 09:02:45.316340] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa770, cid 4, qid 0 00:36:50.766 [2024-05-15 09:02:45.316493] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:50.766 [2024-05-15 09:02:45.316508] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:50.766 [2024-05-15 09:02:45.316515] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.316522] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1051120): datao=0, datal=4096, cccid=4 00:36:50.766 [2024-05-15 09:02:45.316530] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10aa770) on tqpair(0x1051120): expected_datao=0, payload_size=4096 00:36:50.766 [2024-05-15 09:02:45.316538] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.316556] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.316564] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.357322] 
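
[Editor's note] Two details worth pulling out of the setup traced above: the driver parks four outstanding ASYNC EVENT REQUESTs (cid 0..3, matching the "Async Event Request Limit: 4" reported further down), and the keep-alive exchange settles on "Sending keep alive every 5000000 us", i.e. half of the negotiated 10000 ms timeout. Both knobs live in real spdk_nvme_ctrlr_opts fields; a fragment with values taken from this trace:

struct spdk_nvme_ctrlr_opts opts;

spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
opts.keep_alive_timeout_ms = 10000;  /* negotiated via GET FEATURES KEEP ALIVE TIMER;
                                      * keep-alives are then sent at half this, 5 s */
opts.num_io_queues = 127;            /* requested cap; the target trims it via
                                      * SET FEATURES NUMBER OF QUEUES (cdw10:00000007) */
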
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.766 [2024-05-15 09:02:45.357350] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.766 [2024-05-15 09:02:45.357357] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.357365] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa770) on tqpair=0x1051120 00:36:50.766 [2024-05-15 09:02:45.357390] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:36:50.766 [2024-05-15 09:02:45.357412] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:36:50.766 [2024-05-15 09:02:45.357431] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:36:50.766 [2024-05-15 09:02:45.357444] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.357452] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1051120) 00:36:50.766 [2024-05-15 09:02:45.357464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.766 [2024-05-15 09:02:45.357487] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa770, cid 4, qid 0 00:36:50.766 [2024-05-15 09:02:45.357607] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:50.766 [2024-05-15 09:02:45.357619] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:50.766 [2024-05-15 09:02:45.357626] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.357633] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1051120): datao=0, datal=4096, cccid=4 00:36:50.766 [2024-05-15 09:02:45.357641] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10aa770) on tqpair(0x1051120): expected_datao=0, payload_size=4096 00:36:50.766 [2024-05-15 09:02:45.357649] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.357665] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.357673] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.402230] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.766 [2024-05-15 09:02:45.402248] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.766 [2024-05-15 09:02:45.402256] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.402263] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa770) on tqpair=0x1051120 00:36:50.766 [2024-05-15 09:02:45.402283] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:36:50.766 [2024-05-15 09:02:45.402316] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:36:50.766 [2024-05-15 09:02:45.402338] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.402347] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1051120) 00:36:50.766 [2024-05-15 09:02:45.402359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.766 [2024-05-15 09:02:45.402382] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa770, cid 4, qid 0 00:36:50.766 [2024-05-15 09:02:45.402536] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:50.766 [2024-05-15 09:02:45.402548] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:50.766 [2024-05-15 09:02:45.402555] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.402562] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1051120): datao=0, datal=4096, cccid=4 00:36:50.766 [2024-05-15 09:02:45.402570] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10aa770) on tqpair(0x1051120): expected_datao=0, payload_size=4096 00:36:50.766 [2024-05-15 09:02:45.402578] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.402594] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.402602] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.443341] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.766 [2024-05-15 09:02:45.443359] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.766 [2024-05-15 09:02:45.443367] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.443374] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa770) on tqpair=0x1051120 00:36:50.766 [2024-05-15 09:02:45.443397] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:36:50.766 [2024-05-15 09:02:45.443414] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:36:50.766 [2024-05-15 09:02:45.443429] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:36:50.766 [2024-05-15 09:02:45.443440] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:36:50.766 [2024-05-15 09:02:45.443449] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:36:50.766 [2024-05-15 09:02:45.443459] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:36:50.766 [2024-05-15 09:02:45.443468] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:36:50.766 [2024-05-15 09:02:45.443477] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:36:50.766 [2024-05-15 09:02:45.443500] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.443510] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1051120) 00:36:50.766 [2024-05-15 09:02:45.443521] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.766 [2024-05-15 09:02:45.443533] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.443540] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.766 [2024-05-15 09:02:45.443547] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1051120) 00:36:50.766 [2024-05-15 09:02:45.443557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:36:50.767 [2024-05-15 09:02:45.443583] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa770, cid 4, qid 0 00:36:50.767 [2024-05-15 09:02:45.443599] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa8d0, cid 5, qid 0 00:36:50.767 [2024-05-15 09:02:45.443701] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.767 [2024-05-15 09:02:45.443717] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.767 [2024-05-15 09:02:45.443724] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.443731] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa770) on tqpair=0x1051120 00:36:50.767 [2024-05-15 09:02:45.443742] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.767 [2024-05-15 09:02:45.443752] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.767 [2024-05-15 09:02:45.443758] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.443765] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa8d0) on tqpair=0x1051120 00:36:50.767 [2024-05-15 09:02:45.443783] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.443791] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1051120) 00:36:50.767 [2024-05-15 09:02:45.443802] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.767 [2024-05-15 09:02:45.443823] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa8d0, cid 5, qid 0 00:36:50.767 [2024-05-15 09:02:45.443918] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.767 [2024-05-15 09:02:45.443931] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.767 [2024-05-15 09:02:45.443938] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.443945] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa8d0) on tqpair=0x1051120 00:36:50.767 [2024-05-15 09:02:45.443962] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.443970] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1051120) 00:36:50.767 [2024-05-15 09:02:45.443981] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.767 [2024-05-15 09:02:45.444001] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa8d0, cid 5, qid 0 00:36:50.767 [2024-05-15 09:02:45.444092] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.767 [2024-05-15 09:02:45.444104] 
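
[Editor's note] By this point the trace has reached "setting state to ready (no timeout)": the admin queue is fully initialized and namespace 1 is visible; what follows is GET FEATURES / GET LOG PAGE wrap-up traffic. A minimal host-side sketch that drives this whole sequence against the same target and then walks the active namespaces (assumes a working SPDK build; error handling trimmed):

#include <stdint.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr_opts opts;
	struct spdk_nvme_ctrlr *ctrlr;
	uint32_t nsid;

	spdk_env_opts_init(&env_opts);
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same target this run connects to. */
	if (spdk_nvme_transport_id_parse(&trid,
	        "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	        "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));

	/* Runs the sequence traced above: Fabrics CONNECT, the CC.EN/CSTS.RDY
	 * handshake, IDENTIFY, AER and keep-alive setup, queue count
	 * negotiation, namespace discovery. Returns once the state machine
	 * hits "ready". */
	ctrlr = spdk_nvme_connect(&trid, &opts, sizeof(opts));
	if (ctrlr == NULL) {
		return 1;
	}

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
		printf("nsid %u: %ju sectors\n", nsid,
		       (uintmax_t)spdk_nvme_ns_get_num_sectors(ns));
	}

	spdk_nvme_detach(ctrlr);
	return 0;
}
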
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.767 [2024-05-15 09:02:45.444111] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.444118] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa8d0) on tqpair=0x1051120 00:36:50.767 [2024-05-15 09:02:45.444135] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.444144] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1051120) 00:36:50.767 [2024-05-15 09:02:45.444155] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.767 [2024-05-15 09:02:45.444175] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa8d0, cid 5, qid 0 00:36:50.767 [2024-05-15 09:02:45.444287] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.767 [2024-05-15 09:02:45.444303] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.767 [2024-05-15 09:02:45.444310] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.444317] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa8d0) on tqpair=0x1051120 00:36:50.767 [2024-05-15 09:02:45.444339] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.444349] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1051120) 00:36:50.767 [2024-05-15 09:02:45.444359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.767 [2024-05-15 09:02:45.444375] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.444384] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1051120) 00:36:50.767 [2024-05-15 09:02:45.444393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.767 [2024-05-15 09:02:45.444405] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.444413] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1051120) 00:36:50.767 [2024-05-15 09:02:45.444422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.767 [2024-05-15 09:02:45.444438] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.444446] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1051120) 00:36:50.767 [2024-05-15 09:02:45.444456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.767 [2024-05-15 09:02:45.444478] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa8d0, cid 5, qid 0 00:36:50.767 [2024-05-15 09:02:45.444489] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa770, cid 4, qid 0 00:36:50.767 [2024-05-15 09:02:45.444497] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x10aaa30, cid 6, qid 0 00:36:50.767 [2024-05-15 09:02:45.444505] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aab90, cid 7, qid 0 00:36:50.767 [2024-05-15 09:02:45.444746] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:50.767 [2024-05-15 09:02:45.444762] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:50.767 [2024-05-15 09:02:45.444769] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.444776] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1051120): datao=0, datal=8192, cccid=5 00:36:50.767 [2024-05-15 09:02:45.444784] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10aa8d0) on tqpair(0x1051120): expected_datao=0, payload_size=8192 00:36:50.767 [2024-05-15 09:02:45.444792] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.444802] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.444810] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.444819] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:50.767 [2024-05-15 09:02:45.444828] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:50.767 [2024-05-15 09:02:45.444835] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.444841] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1051120): datao=0, datal=512, cccid=4 00:36:50.767 [2024-05-15 09:02:45.444849] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10aa770) on tqpair(0x1051120): expected_datao=0, payload_size=512 00:36:50.767 [2024-05-15 09:02:45.444856] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.444866] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.444873] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.444881] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:50.767 [2024-05-15 09:02:45.444890] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:50.767 [2024-05-15 09:02:45.444897] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.444903] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1051120): datao=0, datal=512, cccid=6 00:36:50.767 [2024-05-15 09:02:45.444911] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10aaa30) on tqpair(0x1051120): expected_datao=0, payload_size=512 00:36:50.767 [2024-05-15 09:02:45.444923] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.444933] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.444940] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.444948] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:50.767 [2024-05-15 09:02:45.444957] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:50.767 [2024-05-15 09:02:45.444964] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.444970] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1051120): datao=0, datal=4096, cccid=7 
00:36:50.767 [2024-05-15 09:02:45.444978] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10aab90) on tqpair(0x1051120): expected_datao=0, payload_size=4096 00:36:50.767 [2024-05-15 09:02:45.444985] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.445006] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.445015] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.485354] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.767 [2024-05-15 09:02:45.485373] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.767 [2024-05-15 09:02:45.485381] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.485388] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa8d0) on tqpair=0x1051120 00:36:50.767 [2024-05-15 09:02:45.485409] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.767 [2024-05-15 09:02:45.485421] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.767 [2024-05-15 09:02:45.485428] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.485435] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa770) on tqpair=0x1051120 00:36:50.767 [2024-05-15 09:02:45.485450] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.767 [2024-05-15 09:02:45.485461] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.767 [2024-05-15 09:02:45.485468] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.485475] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aaa30) on tqpair=0x1051120 00:36:50.767 [2024-05-15 09:02:45.485490] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.767 [2024-05-15 09:02:45.485500] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.767 [2024-05-15 09:02:45.485507] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.767 [2024-05-15 09:02:45.485514] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aab90) on tqpair=0x1051120 00:36:50.767 ===================================================== 00:36:50.767 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:50.767 ===================================================== 00:36:50.767 Controller Capabilities/Features 00:36:50.767 ================================ 00:36:50.767 Vendor ID: 8086 00:36:50.767 Subsystem Vendor ID: 8086 00:36:50.767 Serial Number: SPDK00000000000001 00:36:50.767 Model Number: SPDK bdev Controller 00:36:50.767 Firmware Version: 24.05 00:36:50.767 Recommended Arb Burst: 6 00:36:50.768 IEEE OUI Identifier: e4 d2 5c 00:36:50.768 Multi-path I/O 00:36:50.768 May have multiple subsystem ports: Yes 00:36:50.768 May have multiple controllers: Yes 00:36:50.768 Associated with SR-IOV VF: No 00:36:50.768 Max Data Transfer Size: 131072 00:36:50.768 Max Number of Namespaces: 32 00:36:50.768 Max Number of I/O Queues: 127 00:36:50.768 NVMe Specification Version (VS): 1.3 00:36:50.768 NVMe Specification Version (Identify): 1.3 00:36:50.768 Maximum Queue Entries: 128 00:36:50.768 Contiguous Queues Required: Yes 00:36:50.768 Arbitration Mechanisms Supported 00:36:50.768 Weighted Round Robin: Not Supported 00:36:50.768 Vendor 
Specific: Not Supported 00:36:50.768 Reset Timeout: 15000 ms 00:36:50.768 Doorbell Stride: 4 bytes 00:36:50.768 NVM Subsystem Reset: Not Supported 00:36:50.768 Command Sets Supported 00:36:50.768 NVM Command Set: Supported 00:36:50.768 Boot Partition: Not Supported 00:36:50.768 Memory Page Size Minimum: 4096 bytes 00:36:50.768 Memory Page Size Maximum: 4096 bytes 00:36:50.768 Persistent Memory Region: Not Supported 00:36:50.768 Optional Asynchronous Events Supported 00:36:50.768 Namespace Attribute Notices: Supported 00:36:50.768 Firmware Activation Notices: Not Supported 00:36:50.768 ANA Change Notices: Not Supported 00:36:50.768 PLE Aggregate Log Change Notices: Not Supported 00:36:50.768 LBA Status Info Alert Notices: Not Supported 00:36:50.768 EGE Aggregate Log Change Notices: Not Supported 00:36:50.768 Normal NVM Subsystem Shutdown event: Not Supported 00:36:50.768 Zone Descriptor Change Notices: Not Supported 00:36:50.768 Discovery Log Change Notices: Not Supported 00:36:50.768 Controller Attributes 00:36:50.768 128-bit Host Identifier: Supported 00:36:50.768 Non-Operational Permissive Mode: Not Supported 00:36:50.768 NVM Sets: Not Supported 00:36:50.768 Read Recovery Levels: Not Supported 00:36:50.768 Endurance Groups: Not Supported 00:36:50.768 Predictable Latency Mode: Not Supported 00:36:50.768 Traffic Based Keep ALive: Not Supported 00:36:50.768 Namespace Granularity: Not Supported 00:36:50.768 SQ Associations: Not Supported 00:36:50.768 UUID List: Not Supported 00:36:50.768 Multi-Domain Subsystem: Not Supported 00:36:50.768 Fixed Capacity Management: Not Supported 00:36:50.768 Variable Capacity Management: Not Supported 00:36:50.768 Delete Endurance Group: Not Supported 00:36:50.768 Delete NVM Set: Not Supported 00:36:50.768 Extended LBA Formats Supported: Not Supported 00:36:50.768 Flexible Data Placement Supported: Not Supported 00:36:50.768 00:36:50.768 Controller Memory Buffer Support 00:36:50.768 ================================ 00:36:50.768 Supported: No 00:36:50.768 00:36:50.768 Persistent Memory Region Support 00:36:50.768 ================================ 00:36:50.768 Supported: No 00:36:50.768 00:36:50.768 Admin Command Set Attributes 00:36:50.768 ============================ 00:36:50.768 Security Send/Receive: Not Supported 00:36:50.768 Format NVM: Not Supported 00:36:50.768 Firmware Activate/Download: Not Supported 00:36:50.768 Namespace Management: Not Supported 00:36:50.768 Device Self-Test: Not Supported 00:36:50.768 Directives: Not Supported 00:36:50.768 NVMe-MI: Not Supported 00:36:50.768 Virtualization Management: Not Supported 00:36:50.768 Doorbell Buffer Config: Not Supported 00:36:50.768 Get LBA Status Capability: Not Supported 00:36:50.768 Command & Feature Lockdown Capability: Not Supported 00:36:50.768 Abort Command Limit: 4 00:36:50.768 Async Event Request Limit: 4 00:36:50.768 Number of Firmware Slots: N/A 00:36:50.768 Firmware Slot 1 Read-Only: N/A 00:36:50.768 Firmware Activation Without Reset: N/A 00:36:50.768 Multiple Update Detection Support: N/A 00:36:50.768 Firmware Update Granularity: No Information Provided 00:36:50.768 Per-Namespace SMART Log: No 00:36:50.768 Asymmetric Namespace Access Log Page: Not Supported 00:36:50.768 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:36:50.768 Command Effects Log Page: Supported 00:36:50.768 Get Log Page Extended Data: Supported 00:36:50.768 Telemetry Log Pages: Not Supported 00:36:50.768 Persistent Event Log Pages: Not Supported 00:36:50.768 Supported Log Pages Log Page: May Support 00:36:50.768 Commands 
Supported & Effects Log Page: Not Supported 00:36:50.768 Feature Identifiers & Effects Log Page:May Support 00:36:50.768 NVMe-MI Commands & Effects Log Page: May Support 00:36:50.768 Data Area 4 for Telemetry Log: Not Supported 00:36:50.768 Error Log Page Entries Supported: 128 00:36:50.768 Keep Alive: Supported 00:36:50.768 Keep Alive Granularity: 10000 ms 00:36:50.768 00:36:50.768 NVM Command Set Attributes 00:36:50.768 ========================== 00:36:50.768 Submission Queue Entry Size 00:36:50.768 Max: 64 00:36:50.768 Min: 64 00:36:50.768 Completion Queue Entry Size 00:36:50.768 Max: 16 00:36:50.768 Min: 16 00:36:50.768 Number of Namespaces: 32 00:36:50.768 Compare Command: Supported 00:36:50.768 Write Uncorrectable Command: Not Supported 00:36:50.768 Dataset Management Command: Supported 00:36:50.768 Write Zeroes Command: Supported 00:36:50.768 Set Features Save Field: Not Supported 00:36:50.768 Reservations: Supported 00:36:50.768 Timestamp: Not Supported 00:36:50.768 Copy: Supported 00:36:50.768 Volatile Write Cache: Present 00:36:50.768 Atomic Write Unit (Normal): 1 00:36:50.768 Atomic Write Unit (PFail): 1 00:36:50.768 Atomic Compare & Write Unit: 1 00:36:50.768 Fused Compare & Write: Supported 00:36:50.768 Scatter-Gather List 00:36:50.768 SGL Command Set: Supported 00:36:50.768 SGL Keyed: Supported 00:36:50.768 SGL Bit Bucket Descriptor: Not Supported 00:36:50.768 SGL Metadata Pointer: Not Supported 00:36:50.768 Oversized SGL: Not Supported 00:36:50.768 SGL Metadata Address: Not Supported 00:36:50.768 SGL Offset: Supported 00:36:50.768 Transport SGL Data Block: Not Supported 00:36:50.768 Replay Protected Memory Block: Not Supported 00:36:50.768 00:36:50.768 Firmware Slot Information 00:36:50.768 ========================= 00:36:50.768 Active slot: 1 00:36:50.768 Slot 1 Firmware Revision: 24.05 00:36:50.768 00:36:50.768 00:36:50.768 Commands Supported and Effects 00:36:50.768 ============================== 00:36:50.768 Admin Commands 00:36:50.768 -------------- 00:36:50.768 Get Log Page (02h): Supported 00:36:50.768 Identify (06h): Supported 00:36:50.768 Abort (08h): Supported 00:36:50.768 Set Features (09h): Supported 00:36:50.768 Get Features (0Ah): Supported 00:36:50.768 Asynchronous Event Request (0Ch): Supported 00:36:50.768 Keep Alive (18h): Supported 00:36:50.768 I/O Commands 00:36:50.768 ------------ 00:36:50.768 Flush (00h): Supported LBA-Change 00:36:50.768 Write (01h): Supported LBA-Change 00:36:50.768 Read (02h): Supported 00:36:50.768 Compare (05h): Supported 00:36:50.768 Write Zeroes (08h): Supported LBA-Change 00:36:50.768 Dataset Management (09h): Supported LBA-Change 00:36:50.768 Copy (19h): Supported LBA-Change 00:36:50.768 Unknown (79h): Supported LBA-Change 00:36:50.768 Unknown (7Ah): Supported 00:36:50.768 00:36:50.768 Error Log 00:36:50.768 ========= 00:36:50.768 00:36:50.768 Arbitration 00:36:50.768 =========== 00:36:50.768 Arbitration Burst: 1 00:36:50.768 00:36:50.768 Power Management 00:36:50.768 ================ 00:36:50.768 Number of Power States: 1 00:36:50.768 Current Power State: Power State #0 00:36:50.768 Power State #0: 00:36:50.768 Max Power: 0.00 W 00:36:50.768 Non-Operational State: Operational 00:36:50.768 Entry Latency: Not Reported 00:36:50.768 Exit Latency: Not Reported 00:36:50.768 Relative Read Throughput: 0 00:36:50.768 Relative Read Latency: 0 00:36:50.768 Relative Write Throughput: 0 00:36:50.768 Relative Write Latency: 0 00:36:50.768 Idle Power: Not Reported 00:36:50.768 Active Power: Not Reported 00:36:50.768 Non-Operational 
Permissive Mode: Not Supported 00:36:50.768 00:36:50.768 Health Information 00:36:50.768 ================== 00:36:50.768 Critical Warnings: 00:36:50.768 Available Spare Space: OK 00:36:50.768 Temperature: OK 00:36:50.768 Device Reliability: OK 00:36:50.768 Read Only: No 00:36:50.768 Volatile Memory Backup: OK 00:36:50.768 Current Temperature: 0 Kelvin (-273 Celsius) 00:36:50.768 Temperature Threshold: [2024-05-15 09:02:45.485646] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.768 [2024-05-15 09:02:45.485659] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1051120) 00:36:50.768 [2024-05-15 09:02:45.485670] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.768 [2024-05-15 09:02:45.485693] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aab90, cid 7, qid 0 00:36:50.769 [2024-05-15 09:02:45.485827] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.769 [2024-05-15 09:02:45.485842] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.769 [2024-05-15 09:02:45.485849] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.485856] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aab90) on tqpair=0x1051120 00:36:50.769 [2024-05-15 09:02:45.485898] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:36:50.769 [2024-05-15 09:02:45.485920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:50.769 [2024-05-15 09:02:45.485932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:50.769 [2024-05-15 09:02:45.485946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:50.769 [2024-05-15 09:02:45.485956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:50.769 [2024-05-15 09:02:45.485969] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.485977] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.485984] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1051120) 00:36:50.769 [2024-05-15 09:02:45.485995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.769 [2024-05-15 09:02:45.486018] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa610, cid 3, qid 0 00:36:50.769 [2024-05-15 09:02:45.486141] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.769 [2024-05-15 09:02:45.486156] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.769 [2024-05-15 09:02:45.486163] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.486170] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa610) on tqpair=0x1051120 00:36:50.769 [2024-05-15 09:02:45.486182] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.486190] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.486197] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1051120) 00:36:50.769 [2024-05-15 09:02:45.486207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.769 [2024-05-15 09:02:45.490245] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa610, cid 3, qid 0 00:36:50.769 [2024-05-15 09:02:45.490390] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.769 [2024-05-15 09:02:45.490403] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.769 [2024-05-15 09:02:45.490410] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.490416] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa610) on tqpair=0x1051120 00:36:50.769 [2024-05-15 09:02:45.490425] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:36:50.769 [2024-05-15 09:02:45.490434] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:36:50.769 [2024-05-15 09:02:45.490450] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.490459] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.490466] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1051120) 00:36:50.769 [2024-05-15 09:02:45.490477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.769 [2024-05-15 09:02:45.490498] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa610, cid 3, qid 0 00:36:50.769 [2024-05-15 09:02:45.490611] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.769 [2024-05-15 09:02:45.490626] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.769 [2024-05-15 09:02:45.490633] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.490639] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa610) on tqpair=0x1051120 00:36:50.769 [2024-05-15 09:02:45.490658] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.490667] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.490674] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1051120) 00:36:50.769 [2024-05-15 09:02:45.490685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.769 [2024-05-15 09:02:45.490710] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa610, cid 3, qid 0 00:36:50.769 [2024-05-15 09:02:45.490807] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.769 [2024-05-15 09:02:45.490823] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.769 [2024-05-15 09:02:45.490830] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.490836] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa610) on tqpair=0x1051120 00:36:50.769 [2024-05-15 09:02:45.490854] 
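
[Editor's note] The capability report embedded above is rendered from the cached IDENTIFY CONTROLLER data, which applications can read back through the public API; and the long run of FABRIC PROPERTY GET capsules following "Prepare to destruct SSD" is, by all appearances, the detach path polling CSTS until shutdown completes (the trace reports RTD3E = 0 and a 10000 ms shutdown timeout). A fragment reading a few of the reported fields, assuming a connected ctrlr as in the earlier sketch:

const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

printf("VID 0x%04x SN %.20s MN %.40s FR %.8s\n",
       cdata->vid, cdata->sn, cdata->mn, cdata->fr);
/* "Max Data Transfer Size: 131072" above corresponds to cdata->mdts:
 * 131072 = (1 << mdts) * controller min page size (4096), i.e. mdts = 5 */
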
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.490864] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.490870] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1051120) 00:36:50.769 [2024-05-15 09:02:45.490881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.769 [2024-05-15 09:02:45.490902] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa610, cid 3, qid 0 00:36:50.769 [2024-05-15 09:02:45.490994] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.769 [2024-05-15 09:02:45.491009] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.769 [2024-05-15 09:02:45.491016] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.491023] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa610) on tqpair=0x1051120 00:36:50.769 [2024-05-15 09:02:45.491041] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.491050] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.491057] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1051120) 00:36:50.769 [2024-05-15 09:02:45.491067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.769 [2024-05-15 09:02:45.491088] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa610, cid 3, qid 0 00:36:50.769 [2024-05-15 09:02:45.491182] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.769 [2024-05-15 09:02:45.491194] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.769 [2024-05-15 09:02:45.491201] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.491207] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa610) on tqpair=0x1051120 00:36:50.769 [2024-05-15 09:02:45.491234] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.491244] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.491251] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1051120) 00:36:50.769 [2024-05-15 09:02:45.491262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.769 [2024-05-15 09:02:45.491283] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa610, cid 3, qid 0 00:36:50.769 [2024-05-15 09:02:45.491377] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.769 [2024-05-15 09:02:45.491392] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.769 [2024-05-15 09:02:45.491399] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.491406] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa610) on tqpair=0x1051120 00:36:50.769 [2024-05-15 09:02:45.491424] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.769 [2024-05-15 09:02:45.491433] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.769 [2024-05-15 
09:02:45.491440] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1051120) 00:36:50.769 [2024-05-15 09:02:45.491450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.769 [2024-05-15 09:02:45.491475] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa610, cid 3, qid 0 00:36:50.769 [2024-05-15 09:02:45.491565] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.769 [2024-05-15 09:02:45.491577] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.769 [2024-05-15 09:02:45.491584] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.770 [2024-05-15 09:02:45.491591] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa610) on tqpair=0x1051120 00:36:50.770 [2024-05-15 09:02:45.491609] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.770 [2024-05-15 09:02:45.491618] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.770 [2024-05-15 09:02:45.491625] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1051120) 00:36:50.770 [2024-05-15 09:02:45.491635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.770 [2024-05-15 09:02:45.491656] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa610, cid 3, qid 0 00:36:50.770 [2024-05-15 09:02:45.491755] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.770 [2024-05-15 09:02:45.491770] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.770 [2024-05-15 09:02:45.491777] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.770 [2024-05-15 09:02:45.491784] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa610) on tqpair=0x1051120 00:36:50.770 [2024-05-15 09:02:45.491802] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.770 [2024-05-15 09:02:45.491812] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.770 [2024-05-15 09:02:45.491818] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1051120) 00:36:50.770 [2024-05-15 09:02:45.491829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.770 [2024-05-15 09:02:45.491850] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa610, cid 3, qid 0 00:36:50.770 [2024-05-15 09:02:45.491940] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.770 [2024-05-15 09:02:45.491952] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.770 [2024-05-15 09:02:45.491959] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.770 [2024-05-15 09:02:45.491966] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa610) on tqpair=0x1051120 00:36:50.770 [2024-05-15 09:02:45.491983] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.770 [2024-05-15 09:02:45.491992] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.770 [2024-05-15 09:02:45.491999] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1051120) 00:36:50.770 [2024-05-15 09:02:45.492009] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.770 [2024-05-15 09:02:45.492030] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa610, cid 3, qid 0 00:36:50.770 [2024-05-15 09:02:45.492121] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.770 [2024-05-15 09:02:45.492133] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.770 [2024-05-15 09:02:45.492140] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.770 [2024-05-15 09:02:45.492147] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa610) on tqpair=0x1051120 00:36:50.770 [2024-05-15 09:02:45.492164] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:50.770 [2024-05-15 09:02:45.492173] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:50.770 [2024-05-15 09:02:45.492180] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1051120) 00:36:50.770 [2024-05-15 09:02:45.492191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.770 [2024-05-15 09:02:45.492211] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aa610, cid 3, qid 0

[... this DEBUG/NOTICE cycle (FABRIC PROPERTY GET on qid 0, cid 3) repeats verbatim with only the timestamps advancing, as the host polls the controller property until shutdown finishes; the intervening repetitions are elided ...]

00:36:50.771 [2024-05-15 09:02:45.498403] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:50.771 [2024-05-15 09:02:45.498419] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:50.771 [2024-05-15 09:02:45.498425] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:50.771 [2024-05-15 09:02:45.498432] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10aa610) on
tqpair=0x1051120 00:36:50.771 [2024-05-15 09:02:45.498447] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:36:50.771 0 Kelvin (-273 Celsius) 00:36:50.771 Available Spare: 0% 00:36:50.771 Available Spare Threshold: 0% 00:36:50.771 Life Percentage Used: 0% 00:36:50.771 Data Units Read: 0 00:36:50.771 Data Units Written: 0 00:36:50.771 Host Read Commands: 0 00:36:50.771 Host Write Commands: 0 00:36:50.771 Controller Busy Time: 0 minutes 00:36:50.771 Power Cycles: 0 00:36:50.771 Power On Hours: 0 hours 00:36:50.771 Unsafe Shutdowns: 0 00:36:50.771 Unrecoverable Media Errors: 0 00:36:50.771 Lifetime Error Log Entries: 0 00:36:50.771 Warning Temperature Time: 0 minutes 00:36:50.771 Critical Temperature Time: 0 minutes 00:36:50.771 00:36:50.771 Number of Queues 00:36:50.771 ================ 00:36:50.771 Number of I/O Submission Queues: 127 00:36:50.771 Number of I/O Completion Queues: 127 00:36:50.771 00:36:50.771 Active Namespaces 00:36:50.771 ================= 00:36:50.771 Namespace ID:1 00:36:50.771 Error Recovery Timeout: Unlimited 00:36:50.771 Command Set Identifier: NVM (00h) 00:36:50.771 Deallocate: Supported 00:36:50.771 Deallocated/Unwritten Error: Not Supported 00:36:50.771 Deallocated Read Value: Unknown 00:36:50.771 Deallocate in Write Zeroes: Not Supported 00:36:50.771 Deallocated Guard Field: 0xFFFF 00:36:50.771 Flush: Supported 00:36:50.771 Reservation: Supported 00:36:50.771 Namespace Sharing Capabilities: Multiple Controllers 00:36:50.771 Size (in LBAs): 131072 (0GiB) 00:36:50.771 Capacity (in LBAs): 131072 (0GiB) 00:36:50.771 Utilization (in LBAs): 131072 (0GiB) 00:36:50.771 NGUID: ABCDEF0123456789ABCDEF0123456789 00:36:50.771 EUI64: ABCDEF0123456789 00:36:50.771 UUID: e31513dd-5205-43f3-a0f9-f41d7528b85e 00:36:50.771 Thin Provisioning: Not Supported 00:36:50.771 Per-NS Atomic Units: Yes 00:36:50.771 Atomic Boundary Size (Normal): 0 00:36:50.771 Atomic Boundary Size (PFail): 0 00:36:50.771 Atomic Boundary Offset: 0 00:36:50.771 Maximum Single Source Range Length: 65535 00:36:50.771 Maximum Copy Length: 65535 00:36:50.771 Maximum Source Range Count: 1 00:36:50.771 NGUID/EUI64 Never Reused: No 00:36:50.771 Namespace Write Protected: No 00:36:50.771 Number of LBA Formats: 1 00:36:50.771 Current LBA Format: LBA Format #00 00:36:50.771 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:50.771 00:36:50.771 09:02:45 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:36:50.771 09:02:45 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:50.771 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.771 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:50.771 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.771 09:02:45 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:36:50.771 09:02:45 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:36:50.771 09:02:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:50.771 09:02:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:36:50.771 09:02:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:50.771 09:02:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:36:50.771 09:02:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:50.771 09:02:45 
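
The namespace attributes dumped above (NGUID, EUI64, UUID, the single 512-byte LBA format) are the Identify data the host read over NVMe/TCP before the subsystem was deleted. The harness gathers them with SPDK's own identify example; purely as a hedged illustration, roughly the same view could be had from a stock Linux initiator with nvme-cli — the /dev/nvme0n1 name is an assumption about enumeration order, not something taken from this log:

  # Hypothetical nvme-cli equivalent of the identify pass above (not what the harness ran).
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ns /dev/nvme0n1 --human-readable   # NGUID/EUI64/UUID, LBA formats, per-NS atomic units
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
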
nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:50.771 rmmod nvme_tcp 00:36:50.771 rmmod nvme_fabrics 00:36:51.029 rmmod nvme_keyring 00:36:51.029 09:02:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:51.029 09:02:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:36:51.029 09:02:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:36:51.029 09:02:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2375697 ']' 00:36:51.029 09:02:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2375697 00:36:51.029 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@947 -- # '[' -z 2375697 ']' 00:36:51.029 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # kill -0 2375697 00:36:51.029 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # uname 00:36:51.029 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:51.029 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2375697 00:36:51.029 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:36:51.029 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:36:51.029 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2375697' 00:36:51.029 killing process with pid 2375697 00:36:51.029 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # kill 2375697 00:36:51.029 [2024-05-15 09:02:45.617020] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:51.029 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@971 -- # wait 2375697 00:36:51.315 09:02:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:51.315 09:02:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:51.315 09:02:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:51.315 09:02:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:51.315 09:02:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:51.315 09:02:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:51.315 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:51.315 09:02:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:53.215 09:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:53.215 00:36:53.215 real 0m5.757s 00:36:53.215 user 0m4.802s 00:36:53.215 sys 0m2.128s 00:36:53.215 09:02:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:36:53.215 09:02:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:53.215 ************************************ 00:36:53.215 END TEST nvmf_identify 00:36:53.215 ************************************ 00:36:53.215 09:02:47 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:36:53.215 09:02:47 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:36:53.215 09:02:47 nvmf_tcp -- 
common/autotest_common.sh@1104 -- # xtrace_disable 00:36:53.215 09:02:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:53.215 ************************************ 00:36:53.215 START TEST nvmf_perf 00:36:53.215 ************************************ 00:36:53.215 09:02:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:36:53.474 * Looking for test storage... 00:36:53.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.474 09:02:48 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:[…]:/var/lib/snapd/snap/bin 09:02:48 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[…]:/var/lib/snapd/snap/bin 09:02:48 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 09:02:48 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[…]:/var/lib/snapd/snap/bin 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 09:02:48 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 09:02:48 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 09:02:48 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 09:02:48 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:53.475
09:02:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:53.475 09:02:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:53.475 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:53.475 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:53.475 09:02:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:36:53.475 09:02:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:36:56.003 Found 0000:09:00.0 (0x8086 - 0x159b) 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:36:56.003 Found 0000:09:00.1 (0x8086 - 0x159b) 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:36:56.003 Found net devices under 0000:09:00.0: cvl_0_0 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:36:56.003 Found net devices under 0000:09:00.1: cvl_0_1 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- 
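
The "Found net devices under ..." lines come from the harness globbing each PCI function's sysfs node to find the kernel netdevs it backs. A minimal standalone sketch of that lookup, assuming only that the driver populates a net/ directory under the device:

  # Sketch of the sysfs glob behind the "Found net devices" messages.
  for pci in 0000:09:00.0 0000:09:00.1; do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
      done
  done
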
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:56.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:56.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:36:56.003 00:36:56.003 --- 10.0.0.2 ping statistics --- 00:36:56.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:56.003 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:56.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:56.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:36:56.003 00:36:56.003 --- 10.0.0.1 ping statistics --- 00:36:56.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:56.003 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@721 -- # xtrace_disable 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2378136 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2378136 00:36:56.003 09:02:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@828 -- # '[' -z 2378136 ']' 00:36:56.004 09:02:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:56.004 09:02:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:56.004 09:02:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:56.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:56.004 09:02:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:56.004 09:02:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:36:56.004 [2024-05-15 09:02:50.726823] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:36:56.004 [2024-05-15 09:02:50.726894] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:56.004 EAL: No free 2048 kB hugepages reported on node 1 00:36:56.262 [2024-05-15 09:02:50.800033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:56.262 [2024-05-15 09:02:50.885347] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:56.262 [2024-05-15 09:02:50.885395] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
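
The nvmf_tcp_init steps above split the two NIC ports between the root namespace (initiator side, cvl_0_1, 10.0.0.1) and a dedicated network namespace (target side, cvl_0_0, 10.0.0.2), so target and initiator reach each other through the NIC ports rather than a single in-kernel path; the target application is then launched inside that namespace. Condensed from the commands logged above, with the long Jenkins workspace path abbreviated:

  # Condensed from the nvmf_tcp_init sequence above (binary path abbreviated).
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # waitforlisten then blocks until the target answers on /var/tmp/spdk.sock
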
00:36:56.263 [2024-05-15 09:02:50.885408] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:56.263 [2024-05-15 09:02:50.885419] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:56.263 [2024-05-15 09:02:50.885430] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:56.263 [2024-05-15 09:02:50.885493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:56.263 [2024-05-15 09:02:50.885575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:56.263 [2024-05-15 09:02:50.885642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:56.263 [2024-05-15 09:02:50.885644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:56.263 09:02:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:56.263 09:02:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@861 -- # return 0 00:36:56.263 09:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:56.263 09:02:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:56.263 09:02:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:36:56.263 09:02:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:56.263 09:02:51 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:56.263 09:02:51 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:36:59.542 09:02:54 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:36:59.542 09:02:54 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:36:59.799 09:02:54 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:36:59.799 09:02:54 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:00.057 09:02:54 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:37:00.057 09:02:54 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:37:00.057 09:02:54 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:37:00.057 09:02:54 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:37:00.057 09:02:54 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:37:00.315 [2024-05-15 09:02:54.851152] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:00.315 09:02:54 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:00.573 09:02:55 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:37:00.573 09:02:55 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:00.830 09:02:55 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:37:00.830 09:02:55 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:00.830 09:02:55 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:01.089 [2024-05-15 09:02:55.834441] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:37:01.089 [2024-05-15 09:02:55.834776] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:01.089 09:02:55 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:01.346 09:02:56 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:37:01.346 09:02:56 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:37:01.346 09:02:56 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:37:01.346 09:02:56 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:37:02.717 Initializing NVMe Controllers 00:37:02.717 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:37:02.717 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:37:02.717 Initialization complete. Launching workers. 00:37:02.718 ======================================================== 00:37:02.718 Latency(us) 00:37:02.718 Device Information : IOPS MiB/s Average min max 00:37:02.718 PCIE (0000:0b:00.0) NSID 1 from core 0: 84438.52 329.84 378.21 15.88 5302.88 00:37:02.718 ======================================================== 00:37:02.718 Total : 84438.52 329.84 378.21 15.88 5302.88 00:37:02.718 00:37:02.718 09:02:57 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:02.718 EAL: No free 2048 kB hugepages reported on node 1 00:37:04.090 Initializing NVMe Controllers 00:37:04.090 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:04.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:04.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:37:04.090 Initialization complete. Launching workers. 
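
Taken together, the rpc.py calls above are the entire target-side setup for this test: one TCP transport, one subsystem with two namespaces (the 64 MiB Malloc0 RAM bdev and the local NVMe drive Nvme0n1), plus data and discovery listeners. A condensed sketch of the sequence, using the same entry points the harness calls:

  # Condensed from the host/perf.sh provisioning steps above.
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
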
00:37:04.090 ======================================================== 00:37:04.090 Latency(us) 00:37:04.090 Device Information : IOPS MiB/s Average min max 00:37:04.090 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 89.77 0.35 11506.05 159.13 45789.99 00:37:04.090 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 38.90 0.15 26319.69 7963.65 48227.43 00:37:04.090 ======================================================== 00:37:04.090 Total : 128.67 0.50 15984.59 159.13 48227.43 00:37:04.090 00:37:04.090 09:02:58 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:04.090 EAL: No free 2048 kB hugepages reported on node 1 00:37:05.465 Initializing NVMe Controllers 00:37:05.465 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:05.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:05.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:37:05.465 Initialization complete. Launching workers. 00:37:05.465 ======================================================== 00:37:05.465 Latency(us) 00:37:05.465 Device Information : IOPS MiB/s Average min max 00:37:05.465 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8474.85 33.10 3776.03 594.06 11105.05 00:37:05.465 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3817.17 14.91 8412.76 5184.12 18928.83 00:37:05.465 ======================================================== 00:37:05.465 Total : 12292.02 48.02 5215.92 594.06 18928.83 00:37:05.465 00:37:05.465 09:03:00 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:37:05.465 09:03:00 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:37:05.465 09:03:00 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:05.465 EAL: No free 2048 kB hugepages reported on node 1 00:37:07.994 Initializing NVMe Controllers 00:37:07.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:07.994 Controller IO queue size 128, less than required. 00:37:07.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:07.994 Controller IO queue size 128, less than required. 00:37:07.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:07.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:07.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:37:07.994 Initialization complete. Launching workers. 
00:37:07.994 ======================================================== 00:37:07.994 Latency(us) 00:37:07.994 Device Information : IOPS MiB/s Average min max 00:37:07.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1550.28 387.57 84275.34 57353.10 142745.64 00:37:07.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 586.42 146.60 222508.31 92478.74 303620.76 00:37:07.994 ======================================================== 00:37:07.994 Total : 2136.69 534.17 122213.40 57353.10 303620.76 00:37:07.994 00:37:07.994 09:03:02 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:37:07.994 EAL: No free 2048 kB hugepages reported on node 1 00:37:07.994 No valid NVMe controllers or AIO or URING devices found 00:37:07.994 Initializing NVMe Controllers 00:37:07.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:07.994 Controller IO queue size 128, less than required. 00:37:07.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:07.994 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:37:07.994 Controller IO queue size 128, less than required. 00:37:07.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:07.994 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:37:07.994 WARNING: Some requested NVMe devices were skipped 00:37:08.252 09:03:02 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:37:08.252 EAL: No free 2048 kB hugepages reported on node 1 00:37:10.810 Initializing NVMe Controllers 00:37:10.810 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:10.810 Controller IO queue size 128, less than required. 00:37:10.810 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:10.810 Controller IO queue size 128, less than required. 00:37:10.810 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:10.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:10.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:37:10.810 Initialization complete. Launching workers. 
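
The "No valid NVMe controllers or AIO or URING devices found" line a few lines up is the expected result of the -q 128 -o 36964 run, not a failure: 36964 bytes is not a multiple of the 512-byte sector size, so perf removes both namespaces from the test and is left with nothing to drive. The check is plain modular arithmetic:

  # Why the -o 36964 run skipped every namespace: 36964 = 72 * 512 + 100.
  io_size=36964; sector_size=512
  echo $(( io_size % sector_size ))   # 100 -> non-zero, namespace dropped from the run
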
00:37:10.810 00:37:10.810 ==================== 00:37:10.810 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:37:10.810 TCP transport: 00:37:10.810 polls: 14711 00:37:10.810 idle_polls: 10548 00:37:10.810 sock_completions: 4163 00:37:10.810 nvme_completions: 4793 00:37:10.810 submitted_requests: 7152 00:37:10.810 queued_requests: 1 00:37:10.810 00:37:10.810 ==================== 00:37:10.810 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:37:10.810 TCP transport: 00:37:10.810 polls: 13877 00:37:10.810 idle_polls: 9128 00:37:10.810 sock_completions: 4749 00:37:10.810 nvme_completions: 5975 00:37:10.810 submitted_requests: 8914 00:37:10.810 queued_requests: 1 00:37:10.810 ======================================================== 00:37:10.810 Latency(us) 00:37:10.810 Device Information : IOPS MiB/s Average min max 00:37:10.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1197.97 299.49 110967.14 75393.78 176084.87 00:37:10.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1493.47 373.37 86861.56 40115.56 129500.55 00:37:10.810 ======================================================== 00:37:10.810 Total : 2691.44 672.86 97591.07 40115.56 176084.87 00:37:10.810 00:37:10.810 09:03:05 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:37:10.810 09:03:05 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:10.810 09:03:05 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:37:10.810 09:03:05 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:0b:00.0 ']' 00:37:10.810 09:03:05 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:37:14.088 09:03:08 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=6bc9bda2-2443-4f3a-90fa-b63ca8da650e 00:37:14.088 09:03:08 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 6bc9bda2-2443-4f3a-90fa-b63ca8da650e 00:37:14.088 09:03:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_uuid=6bc9bda2-2443-4f3a-90fa-b63ca8da650e 00:37:14.088 09:03:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local lvs_info 00:37:14.088 09:03:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local fc 00:37:14.088 09:03:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local cs 00:37:14.089 09:03:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:37:14.346 09:03:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # lvs_info='[ 00:37:14.346 { 00:37:14.346 "uuid": "6bc9bda2-2443-4f3a-90fa-b63ca8da650e", 00:37:14.346 "name": "lvs_0", 00:37:14.346 "base_bdev": "Nvme0n1", 00:37:14.346 "total_data_clusters": 238234, 00:37:14.346 "free_clusters": 238234, 00:37:14.346 "block_size": 512, 00:37:14.346 "cluster_size": 4194304 00:37:14.346 } 00:37:14.346 ]' 00:37:14.346 09:03:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="6bc9bda2-2443-4f3a-90fa-b63ca8da650e") .free_clusters' 00:37:14.603 09:03:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # fc=238234 00:37:14.603 09:03:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="6bc9bda2-2443-4f3a-90fa-b63ca8da650e") .cluster_size' 00:37:14.603 09:03:09 
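
A quick read of the --transport-stat dump above, for this particular run: on the NSID 1 queue, 10548 of 14711 polls found nothing to reap, and the 4163 busy polls match the 4163 sock_completions one-for-one; the NSID 2 queue shows the same pattern (13877 - 9128 = 4749). The arithmetic, with values copied from the statistics block:

  # Values copied from the NSID 1 statistics above.
  polls=14711; idle_polls=10548; sock_completions=4163
  echo "busy polls:    $(( polls - idle_polls ))"         # 4163, equal to sock_completions here
  echo "idle fraction: $(( 100 * idle_polls / polls ))%"  # 71%
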
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # cs=4194304 00:37:14.603 09:03:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # free_mb=952936 00:37:14.603 09:03:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1371 -- # echo 952936 00:37:14.603 952936 00:37:14.603 09:03:09 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:37:14.603 09:03:09 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:37:14.603 09:03:09 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6bc9bda2-2443-4f3a-90fa-b63ca8da650e lbd_0 20480 00:37:15.169 09:03:09 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=332dea66-5f49-4fac-b822-bf139dcd576d 00:37:15.169 09:03:09 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 332dea66-5f49-4fac-b822-bf139dcd576d lvs_n_0 00:37:16.101 09:03:10 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=edb2d1bd-8b29-463a-9045-6e635d68ecb5 00:37:16.101 09:03:10 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb edb2d1bd-8b29-463a-9045-6e635d68ecb5 00:37:16.101 09:03:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_uuid=edb2d1bd-8b29-463a-9045-6e635d68ecb5 00:37:16.101 09:03:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local lvs_info 00:37:16.101 09:03:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local fc 00:37:16.101 09:03:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local cs 00:37:16.101 09:03:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:37:16.101 09:03:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # lvs_info='[ 00:37:16.101 { 00:37:16.101 "uuid": "6bc9bda2-2443-4f3a-90fa-b63ca8da650e", 00:37:16.101 "name": "lvs_0", 00:37:16.101 "base_bdev": "Nvme0n1", 00:37:16.101 "total_data_clusters": 238234, 00:37:16.101 "free_clusters": 233114, 00:37:16.101 "block_size": 512, 00:37:16.101 "cluster_size": 4194304 00:37:16.101 }, 00:37:16.101 { 00:37:16.101 "uuid": "edb2d1bd-8b29-463a-9045-6e635d68ecb5", 00:37:16.101 "name": "lvs_n_0", 00:37:16.101 "base_bdev": "332dea66-5f49-4fac-b822-bf139dcd576d", 00:37:16.101 "total_data_clusters": 5114, 00:37:16.101 "free_clusters": 5114, 00:37:16.101 "block_size": 512, 00:37:16.101 "cluster_size": 4194304 00:37:16.101 } 00:37:16.101 ]' 00:37:16.101 09:03:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="edb2d1bd-8b29-463a-9045-6e635d68ecb5") .free_clusters' 00:37:16.101 09:03:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # fc=5114 00:37:16.101 09:03:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="edb2d1bd-8b29-463a-9045-6e635d68ecb5") .cluster_size' 00:37:16.101 09:03:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # cs=4194304 00:37:16.101 09:03:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # free_mb=20456 00:37:16.101 09:03:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1371 -- # echo 20456 00:37:16.101 20456 00:37:16.101 09:03:10 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:37:16.101 09:03:10 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u edb2d1bd-8b29-463a-9045-6e635d68ecb5 lbd_nest_0 20456 00:37:16.358 09:03:11 nvmf_tcp.nvmf_perf -- 
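
get_lvs_free_mb above is simple cluster accounting: free_clusters times cluster_size, expressed in MiB. With the values jq pulled out of bdev_lvol_get_lvstores it reproduces both sizes echoed in the log; the nested store exposes 5114 of the 5120 clusters that lbd_0 occupies, the difference presumably going to lvstore metadata:

  # Free space per lvstore = free_clusters * cluster_size, in MiB.
  fc=238234; cs=4194304                  # lvs_0, as extracted by jq above
  echo $(( fc * (cs / 1048576) ))        # 952936 MiB, then capped to 20480 for lbd_0
  fc=5114                                # lvs_n_0, carved out of the 20480 MiB lbd_0
  echo $(( fc * (cs / 1048576) ))        # 20456 MiB, used as-is for lbd_nest_0
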
host/perf.sh@88 -- # lb_nested_guid=247b20ce-3663-41db-a017-c2be6b223e49 00:37:16.358 09:03:11 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:16.616 09:03:11 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:37:16.616 09:03:11 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 247b20ce-3663-41db-a017-c2be6b223e49 00:37:16.873 09:03:11 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:17.131 09:03:11 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:37:17.131 09:03:11 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:37:17.131 09:03:11 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:37:17.131 09:03:11 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:37:17.131 09:03:11 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:17.131 EAL: No free 2048 kB hugepages reported on node 1 00:37:29.319 Initializing NVMe Controllers 00:37:29.319 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:29.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:29.319 Initialization complete. Launching workers. 00:37:29.319 ======================================================== 00:37:29.319 Latency(us) 00:37:29.319 Device Information : IOPS MiB/s Average min max 00:37:29.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.10 0.02 20452.93 189.66 46635.67 00:37:29.319 ======================================================== 00:37:29.319 Total : 49.10 0.02 20452.93 189.66 46635.67 00:37:29.319 00:37:29.319 09:03:22 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:37:29.319 09:03:22 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:29.319 EAL: No free 2048 kB hugepages reported on node 1 00:37:39.278 Initializing NVMe Controllers 00:37:39.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:39.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:39.278 Initialization complete. Launching workers. 
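
The two arrays declared above drive the remaining runs: every queue depth in qd_depth is crossed with every IO size in io_size, six spdk_nvme_perf invocations in all. The loop shape, reconstructed from the host/perf.sh trace (binary path abbreviated):

  # Sweep reconstructed from the trace: 3 queue depths x 2 IO sizes = 6 runs.
  qd_depth=("1" "32" "128")
  io_size=("512" "131072")
  for qd in "${qd_depth[@]}"; do
      for o in "${io_size[@]}"; do
          spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
              -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
      done
  done
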
00:37:39.278 ======================================================== 00:37:39.278 Latency(us) 00:37:39.278 Device Information : IOPS MiB/s Average min max 00:37:39.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.60 10.32 12121.12 7018.13 47885.19 00:37:39.278 ======================================================== 00:37:39.278 Total : 82.60 10.32 12121.12 7018.13 47885.19 00:37:39.278 00:37:39.278 09:03:32 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:37:39.278 09:03:32 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:37:39.278 09:03:32 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:39.278 EAL: No free 2048 kB hugepages reported on node 1 00:37:49.303 Initializing NVMe Controllers 00:37:49.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:49.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:49.303 Initialization complete. Launching workers. 00:37:49.303 ======================================================== 00:37:49.303 Latency(us) 00:37:49.303 Device Information : IOPS MiB/s Average min max 00:37:49.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7534.27 3.68 4247.73 281.50 11030.40 00:37:49.303 ======================================================== 00:37:49.303 Total : 7534.27 3.68 4247.73 281.50 11030.40 00:37:49.303 00:37:49.303 09:03:43 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:37:49.303 09:03:43 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:49.303 EAL: No free 2048 kB hugepages reported on node 1 00:37:59.284 Initializing NVMe Controllers 00:37:59.284 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:59.284 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:59.284 Initialization complete. Launching workers. 00:37:59.284 ======================================================== 00:37:59.284 Latency(us) 00:37:59.284 Device Information : IOPS MiB/s Average min max 00:37:59.284 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3253.73 406.72 9834.43 706.74 19186.52 00:37:59.284 ======================================================== 00:37:59.284 Total : 3253.73 406.72 9834.43 706.74 19186.52 00:37:59.284 00:37:59.284 09:03:53 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:37:59.284 09:03:53 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:37:59.285 09:03:53 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:59.285 EAL: No free 2048 kB hugepages reported on node 1 00:38:09.266 Initializing NVMe Controllers 00:38:09.266 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:09.266 Controller IO queue size 128, less than required. 00:38:09.266 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:38:09.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:09.266 Initialization complete. Launching workers. 00:38:09.266 ======================================================== 00:38:09.266 Latency(us) 00:38:09.266 Device Information : IOPS MiB/s Average min max 00:38:09.266 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11898.38 5.81 10757.90 1841.05 48026.19 00:38:09.266 ======================================================== 00:38:09.266 Total : 11898.38 5.81 10757.90 1841.05 48026.19 00:38:09.266 00:38:09.266 09:04:03 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:38:09.266 09:04:03 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:09.266 EAL: No free 2048 kB hugepages reported on node 1 00:38:21.476 Initializing NVMe Controllers 00:38:21.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:21.476 Controller IO queue size 128, less than required. 00:38:21.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:21.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:21.476 Initialization complete. Launching workers. 00:38:21.476 ======================================================== 00:38:21.476 Latency(us) 00:38:21.476 Device Information : IOPS MiB/s Average min max 00:38:21.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1191.59 148.95 107895.01 24104.16 201023.38 00:38:21.476 ======================================================== 00:38:21.476 Total : 1191.59 148.95 107895.01 24104.16 201023.38 00:38:21.476 00:38:21.476 09:04:14 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:21.476 09:04:14 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 247b20ce-3663-41db-a017-c2be6b223e49 00:38:21.476 09:04:15 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:38:21.477 09:04:15 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 332dea66-5f49-4fac-b822-bf139dcd576d 00:38:21.477 09:04:15 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:21.477 rmmod nvme_tcp 00:38:21.477 rmmod nvme_fabrics 00:38:21.477 rmmod nvme_keyring 00:38:21.477 09:04:16 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2378136 ']' 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2378136 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@947 -- # '[' -z 2378136 ']' 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # kill -0 2378136 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # uname 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2378136 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2378136' 00:38:21.477 killing process with pid 2378136 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # kill 2378136 00:38:21.477 [2024-05-15 09:04:16.158367] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:38:21.477 09:04:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@971 -- # wait 2378136 00:38:23.409 09:04:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:23.409 09:04:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:23.409 09:04:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:23.409 09:04:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:23.409 09:04:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:23.409 09:04:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:23.409 09:04:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:23.409 09:04:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:25.317 09:04:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:25.317 00:38:25.317 real 1m31.762s 00:38:25.317 user 5m30.994s 00:38:25.317 sys 0m17.085s 00:38:25.317 09:04:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:38:25.317 09:04:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:38:25.317 ************************************ 00:38:25.317 END TEST nvmf_perf 00:38:25.317 ************************************ 00:38:25.317 09:04:19 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:38:25.317 09:04:19 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:38:25.317 09:04:19 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:38:25.317 09:04:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:25.317 ************************************ 00:38:25.317 START TEST nvmf_fio_host 00:38:25.317 ************************************ 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:38:25.317 * Looking for test storage... 00:38:25.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
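The walls of PATH text above are paths/export.sh being sourced again by each helper script: lines @2 through @4 each prepend a pinned toolchain directory without deduplicating, so every pass stacks fresh copies onto the front of $PATH. A rough reconstruction from the traced values (an assumption about the script body, not a verbatim copy):

    # paths/export.sh, as suggested by the trace above
    PATH=/opt/golangci/1.54.2/bin:$PATH    # @2
    PATH=/opt/go/1.21.1/bin:$PATH          # @3
    PATH=/opt/protoc/21.7/bin:$PATH        # @4
    export PATH                            # @5
    echo $PATH                             # @6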
00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:38:25.317 09:04:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
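So far nvmftestinit has only assembled state: build_nvmf_app_args folds the shared-memory ID and a full tracepoint mask into the NVMF_APP array, and prepare_net_devs (NET_TYPE=phy, so is_hw is decided by a real PCI scan) hands off to gather_supported_nvmf_pci_devs below. The argument assembly, condensed from the trace; reading the bare ": 0" at @47 as the usual ': "${NVMF_APP_SHM_ID:=0}"' default idiom is an assumption:

    : "${NVMF_APP_SHM_ID:=0}"                     # @47 traces as ": 0"
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # @29: shm id, enable all trace groups
    NVMF_APP+=("${NO_HUGE[@]}")                   # @31: empty here, hugepages stay on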
00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:38:27.852 Found 0000:09:00.0 (0x8086 - 0x159b) 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:38:27.852 Found 0000:09:00.1 (0x8086 - 0x159b) 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:38:27.852 Found net devices under 0000:09:00.0: cvl_0_0 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:38:27.852 Found net devices under 0000:09:00.1: cvl_0_1 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:27.852 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:27.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:27.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:38:27.853 00:38:27.853 --- 10.0.0.2 ping statistics --- 00:38:27.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:27.853 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:27.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:27.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:38:27.853 00:38:27.853 --- 10.0.0.1 ping statistics --- 00:38:27.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:27.853 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=2391072 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 2391072 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@828 -- # '[' -z 2391072 ']' 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:27.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:38:27.853 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.853 [2024-05-15 09:04:22.547694] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:38:27.853 [2024-05-15 09:04:22.547773] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:27.853 EAL: No free 2048 kB hugepages reported on node 1 00:38:27.853 [2024-05-15 09:04:22.623307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:28.111 [2024-05-15 09:04:22.711799] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
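The fio host test splits initiator and target across a network namespace: cvl_0_0 (10.0.0.2) moves into cvl_0_0_ns_spdk to serve as the target port, while cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator, and the two pings above prove the path in both directions before the target comes up. The key commands, collected from the trace with the workspace path to nvmf_tgt shortened to $SPDK_DIR:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move the target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF

The -m 0xF core mask matches the "Total cores available: 4" note and the four reactor start notices below.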
00:38:28.111 [2024-05-15 09:04:22.711852] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:28.111 [2024-05-15 09:04:22.711865] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:28.111 [2024-05-15 09:04:22.711876] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:28.111 [2024-05-15 09:04:22.711885] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:28.111 [2024-05-15 09:04:22.711978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:28.111 [2024-05-15 09:04:22.712043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:28.111 [2024-05-15 09:04:22.712109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:38:28.111 [2024-05-15 09:04:22.712112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:28.111 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:38:28.111 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@861 -- # return 0 00:38:28.111 09:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:28.111 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.111 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.111 [2024-05-15 09:04:22.837817] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:28.111 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.111 09:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:38:28.111 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:38:28.111 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.111 09:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:38:28.111 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.111 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.111 Malloc1 00:38:28.111 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.111 09:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:28.111 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.111 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.111 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.111 09:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:38:28.111 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.111 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.369 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.369 09:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:28.369 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.369 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:38:28.369 [2024-05-15 09:04:22.913424] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:38:28.369 [2024-05-15 09:04:22.913747] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:28.369 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.369 09:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:28.369 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.369 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.369 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.369 09:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:38:28.369 09:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:38:28.369 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:38:28.369 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:38:28.369 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:28.369 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:38:28.369 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:28.369 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:38:28.370 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:38:28.370 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:38:28.370 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:28.370 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:38:28.370 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:38:28.370 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:38:28.370 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:38:28.370 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:38:28.370 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:28.370 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:38:28.370 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:38:28.370 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:38:28.370 
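fio_nvme runs stock fio against SPDK's external NVMe ioengine. The ldd | grep libasan | awk '{print $3}' probe above checks whether the plugin links a sanitizer runtime, which would have to be preloaded ahead of it; asan_lib stays empty here, so only the plugin itself gets preloaded, and the NVMe/TCP connection rides in through fio's --filename. The effective command, with the workspace prefix shortened to $SPDK_DIR:

    # the "filename" is not a file: the spdk ioengine parses it as a transport ID
    LD_PRELOAD="$SPDK_DIR/build/fio/spdk_nvme" /usr/src/fio/fio \
      "$SPDK_DIR/app/fio/nvme/example_config.fio" \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The "ioengine=spdk, iodepth=128" banner from fio below confirms the plugin was picked up.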
09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:38:28.370 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:38:28.370 09:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:38:28.370 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:38:28.370 fio-3.35 00:38:28.370 Starting 1 thread 00:38:28.626 EAL: No free 2048 kB hugepages reported on node 1 00:38:31.153 00:38:31.153 test: (groupid=0, jobs=1): err= 0: pid=2391293: Wed May 15 09:04:25 2024 00:38:31.153 read: IOPS=8158, BW=31.9MiB/s (33.4MB/s)(63.9MiB/2006msec) 00:38:31.153 slat (nsec): min=1973, max=131881, avg=2497.40, stdev=1702.13 00:38:31.153 clat (usec): min=2826, max=14430, avg=8646.72, stdev=663.50 00:38:31.153 lat (usec): min=2848, max=14433, avg=8649.22, stdev=663.40 00:38:31.153 clat percentiles (usec): 00:38:31.153 | 1.00th=[ 7177], 5.00th=[ 7635], 10.00th=[ 7832], 20.00th=[ 8094], 00:38:31.153 | 30.00th=[ 8356], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8848], 00:38:31.153 | 70.00th=[ 8979], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9634], 00:38:31.153 | 99.00th=[10159], 99.50th=[10421], 99.90th=[12518], 99.95th=[12649], 00:38:31.153 | 99.99th=[14353] 00:38:31.153 bw ( KiB/s): min=31696, max=32984, per=99.84%, avg=32582.00, stdev=595.50, samples=4 00:38:31.153 iops : min= 7924, max= 8246, avg=8145.50, stdev=148.87, samples=4 00:38:31.153 write: IOPS=8153, BW=31.8MiB/s (33.4MB/s)(63.9MiB/2006msec); 0 zone resets 00:38:31.153 slat (usec): min=2, max=105, avg= 2.66, stdev= 1.20 00:38:31.153 clat (usec): min=1196, max=12587, avg=7010.13, stdev=575.53 00:38:31.153 lat (usec): min=1203, max=12589, avg=7012.79, stdev=575.50 00:38:31.153 clat percentiles (usec): 00:38:31.153 | 1.00th=[ 5800], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6587], 00:38:31.153 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:38:31.153 | 70.00th=[ 7308], 80.00th=[ 7439], 90.00th=[ 7701], 95.00th=[ 7898], 00:38:31.153 | 99.00th=[ 8225], 99.50th=[ 8455], 99.90th=[10945], 99.95th=[11600], 00:38:31.153 | 99.99th=[11731] 00:38:31.153 bw ( KiB/s): min=32192, max=33128, per=99.98%, avg=32608.00, stdev=388.47, samples=4 00:38:31.153 iops : min= 8048, max= 8282, avg=8152.00, stdev=97.12, samples=4 00:38:31.153 lat (msec) : 2=0.01%, 4=0.13%, 10=98.95%, 20=0.91% 00:38:31.153 cpu : usr=62.19%, sys=34.91%, ctx=57, majf=0, minf=34 00:38:31.153 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:38:31.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:31.153 issued rwts: total=16366,16356,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:31.153 00:38:31.153 Run status group 0 (all jobs): 00:38:31.153 READ: bw=31.9MiB/s (33.4MB/s), 31.9MiB/s-31.9MiB/s (33.4MB/s-33.4MB/s), io=63.9MiB (67.0MB), run=2006-2006msec 00:38:31.153 WRITE: bw=31.8MiB/s (33.4MB/s), 31.8MiB/s-31.8MiB/s (33.4MB/s-33.4MB/s), io=63.9MiB (67.0MB), run=2006-2006msec 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:38:31.153 09:04:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:38:31.153 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:38:31.153 fio-3.35 00:38:31.153 Starting 1 thread 00:38:31.153 EAL: No free 2048 kB hugepages reported on node 1 00:38:33.684 00:38:33.684 test: (groupid=0, jobs=1): err= 0: pid=2391621: Wed May 15 09:04:28 2024 00:38:33.684 read: IOPS=8512, BW=133MiB/s (139MB/s)(267MiB/2006msec) 00:38:33.684 slat (nsec): min=2863, max=92193, avg=3876.41, stdev=1891.41 00:38:33.684 clat (usec): min=2385, max=15798, avg=8731.64, stdev=2030.13 00:38:33.684 lat (usec): min=2388, max=15802, avg=8735.52, 
stdev=2030.16 00:38:33.684 clat percentiles (usec): 00:38:33.684 | 1.00th=[ 4621], 5.00th=[ 5473], 10.00th=[ 6128], 20.00th=[ 6980], 00:38:33.684 | 30.00th=[ 7504], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[ 9372], 00:38:33.684 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[11338], 95.00th=[11994], 00:38:33.684 | 99.00th=[13566], 99.50th=[14091], 99.90th=[15008], 99.95th=[15139], 00:38:33.684 | 99.99th=[15795] 00:38:33.684 bw ( KiB/s): min=61280, max=75552, per=50.48%, avg=68752.00, stdev=7749.66, samples=4 00:38:33.684 iops : min= 3830, max= 4722, avg=4297.00, stdev=484.35, samples=4 00:38:33.684 write: IOPS=4998, BW=78.1MiB/s (81.9MB/s)(141MiB/1806msec); 0 zone resets 00:38:33.684 slat (usec): min=30, max=141, avg=34.54, stdev= 5.85 00:38:33.684 clat (usec): min=2903, max=17435, avg=11210.69, stdev=2003.35 00:38:33.684 lat (usec): min=2937, max=17481, avg=11245.23, stdev=2003.52 00:38:33.684 clat percentiles (usec): 00:38:33.684 | 1.00th=[ 7504], 5.00th=[ 8356], 10.00th=[ 8848], 20.00th=[ 9503], 00:38:33.684 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11076], 60.00th=[11469], 00:38:33.684 | 70.00th=[11994], 80.00th=[12780], 90.00th=[14091], 95.00th=[15008], 00:38:33.684 | 99.00th=[16188], 99.50th=[16581], 99.90th=[17171], 99.95th=[17171], 00:38:33.684 | 99.99th=[17433] 00:38:33.684 bw ( KiB/s): min=63008, max=79712, per=89.65%, avg=71704.00, stdev=8802.75, samples=4 00:38:33.684 iops : min= 3938, max= 4982, avg=4481.50, stdev=550.17, samples=4 00:38:33.684 lat (msec) : 4=0.30%, 10=57.15%, 20=42.55% 00:38:33.684 cpu : usr=74.86%, sys=22.99%, ctx=32, majf=0, minf=64 00:38:33.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:38:33.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:33.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:33.684 issued rwts: total=17076,9028,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:33.684 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:33.684 00:38:33.684 Run status group 0 (all jobs): 00:38:33.684 READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=267MiB (280MB), run=2006-2006msec 00:38:33.684 WRITE: bw=78.1MiB/s (81.9MB/s), 78.1MiB/s-78.1MiB/s (81.9MB/s-81.9MB/s), io=141MiB (148MB), run=1806-1806msec 00:38:33.684 09:04:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:33.684 09:04:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:33.684 09:04:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:33.684 09:04:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:33.684 09:04:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:38:33.684 09:04:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:38:33.684 09:04:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # get_nvme_bdfs 00:38:33.684 09:04:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=() 00:38:33.684 09:04:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # local bdfs 00:38:33.684 09:04:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:33.684 09:04:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:33.684 09:04:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # jq -r 
'.config[].params.traddr' 00:38:33.684 09:04:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:38:33.684 09:04:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:0b:00.0 00:38:33.684 09:04:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 -i 10.0.0.2 00:38:33.684 09:04:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:33.684 09:04:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:36.977 Nvme0n1 00:38:36.977 09:04:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:36.977 09:04:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:38:36.977 09:04:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:36.977 09:04:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:39.508 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:39.508 09:04:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # ls_guid=d5e4a45f-90c9-4743-b372-9cbc8fcda508 00:38:39.508 09:04:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # get_lvs_free_mb d5e4a45f-90c9-4743-b372-9cbc8fcda508 00:38:39.508 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_uuid=d5e4a45f-90c9-4743-b372-9cbc8fcda508 00:38:39.508 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local lvs_info 00:38:39.508 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local fc 00:38:39.508 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local cs 00:38:39.508 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # rpc_cmd bdev_lvol_get_lvstores 00:38:39.508 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:39.508 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:39.508 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:39.508 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # lvs_info='[ 00:38:39.509 { 00:38:39.509 "uuid": "d5e4a45f-90c9-4743-b372-9cbc8fcda508", 00:38:39.509 "name": "lvs_0", 00:38:39.509 "base_bdev": "Nvme0n1", 00:38:39.509 "total_data_clusters": 930, 00:38:39.509 "free_clusters": 930, 00:38:39.509 "block_size": 512, 00:38:39.509 "cluster_size": 1073741824 00:38:39.509 } 00:38:39.509 ]' 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="d5e4a45f-90c9-4743-b372-9cbc8fcda508") .free_clusters' 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # fc=930 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="d5e4a45f-90c9-4743-b372-9cbc8fcda508") .cluster_size' 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # cs=1073741824 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # free_mb=952320 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1371 -- # echo 952320 00:38:39.509 952320 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 952320 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:39.509 09:04:33 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:39.509 fd8e1a4f-3afa-44a7-8e12-176c8f439cec 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 
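Two details worth unpacking from the lvol setup just traced. First, the 952320 handed to bdev_lvol_create is get_lvs_free_mb's arithmetic on the lvstore JSON: 930 free clusters at a cluster_size of 1073741824 bytes (1 GiB) is 930 * 1024 = 952320 MiB. Second, publishing the fresh lvol reuses the same three RPCs the perf test used, now for cnode2; rpc_cmd in the trace is the test wrapper around scripts/rpc.py:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420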
00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:39.509 09:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:38:39.509 09:04:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:38:39.509 09:04:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:38:39.509 09:04:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:38:39.509 09:04:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:38:39.509 09:04:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:38:39.509 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:38:39.509 fio-3.35 00:38:39.509 Starting 1 thread 00:38:39.509 EAL: No free 2048 kB hugepages reported on node 1 00:38:42.043 00:38:42.043 test: (groupid=0, jobs=1): err= 0: pid=2392756: Wed May 15 09:04:36 2024 00:38:42.043 read: IOPS=5938, BW=23.2MiB/s (24.3MB/s)(46.6MiB/2007msec) 00:38:42.043 slat (usec): min=2, max=180, avg= 2.86, stdev= 2.82 00:38:42.043 clat (usec): min=890, max=171403, avg=11830.12, stdev=11678.09 00:38:42.043 lat (usec): min=893, max=171434, avg=11832.98, stdev=11678.46 00:38:42.043 clat percentiles (msec): 00:38:42.043 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:38:42.043 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:38:42.043 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:38:42.043 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:38:42.043 | 99.99th=[ 171] 00:38:42.043 bw ( KiB/s): min=16704, max=26192, per=99.74%, avg=23692.00, stdev=4660.60, samples=4 00:38:42.043 iops : min= 4176, max= 6548, avg=5923.00, stdev=1165.15, samples=4 00:38:42.043 write: IOPS=5932, BW=23.2MiB/s (24.3MB/s)(46.5MiB/2007msec); 0 zone resets 00:38:42.043 slat (usec): min=2, max=148, avg= 2.97, stdev= 2.17 00:38:42.043 clat (usec): min=348, max=169367, avg=9587.30, stdev=10967.55 00:38:42.043 lat (usec): min=352, max=169374, avg=9590.27, stdev=10967.93 00:38:42.043 clat percentiles (msec): 00:38:42.043 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:38:42.043 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:38:42.043 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:38:42.043 | 99.00th=[ 11], 99.50th=[ 17], 99.90th=[ 169], 99.95th=[ 169], 00:38:42.043 | 99.99th=[ 169] 00:38:42.043 bw ( KiB/s): min=17704, max=25856, per=99.89%, avg=23706.00, stdev=4003.49, samples=4 00:38:42.043 iops : min= 4426, max= 6464, avg=5926.50, stdev=1000.87, samples=4 00:38:42.043 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:38:42.044 lat (msec) : 2=0.03%, 4=0.13%, 10=53.80%, 20=45.48%, 250=0.54% 00:38:42.044 cpu : usr=55.41%, sys=42.05%, ctx=147, majf=0, minf=34 00:38:42.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:38:42.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:42.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:42.044 issued rwts: total=11919,11907,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:42.044 latency 
: target=0, window=0, percentile=100.00%, depth=128 00:38:42.044 00:38:42.044 Run status group 0 (all jobs): 00:38:42.044 READ: bw=23.2MiB/s (24.3MB/s), 23.2MiB/s-23.2MiB/s (24.3MB/s-24.3MB/s), io=46.6MiB (48.8MB), run=2007-2007msec 00:38:42.044 WRITE: bw=23.2MiB/s (24.3MB/s), 23.2MiB/s-23.2MiB/s (24.3MB/s-24.3MB/s), io=46.5MiB (48.8MB), run=2007-2007msec 00:38:42.044 09:04:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:42.044 09:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:42.044 09:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:42.044 09:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:42.044 09:04:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:38:42.044 09:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:42.044 09:04:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:42.614 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:42.614 09:04:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # ls_nested_guid=2e938a23-b8bb-4a9d-9b88-ae4430ad4f34 00:38:42.614 09:04:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@63 -- # get_lvs_free_mb 2e938a23-b8bb-4a9d-9b88-ae4430ad4f34 00:38:42.614 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_uuid=2e938a23-b8bb-4a9d-9b88-ae4430ad4f34 00:38:42.914 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local lvs_info 00:38:42.914 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local fc 00:38:42.914 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local cs 00:38:42.914 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # rpc_cmd bdev_lvol_get_lvstores 00:38:42.914 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:42.914 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:42.914 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:42.914 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # lvs_info='[ 00:38:42.914 { 00:38:42.914 "uuid": "d5e4a45f-90c9-4743-b372-9cbc8fcda508", 00:38:42.914 "name": "lvs_0", 00:38:42.914 "base_bdev": "Nvme0n1", 00:38:42.914 "total_data_clusters": 930, 00:38:42.914 "free_clusters": 0, 00:38:42.914 "block_size": 512, 00:38:42.914 "cluster_size": 1073741824 00:38:42.914 }, 00:38:42.914 { 00:38:42.914 "uuid": "2e938a23-b8bb-4a9d-9b88-ae4430ad4f34", 00:38:42.914 "name": "lvs_n_0", 00:38:42.914 "base_bdev": "fd8e1a4f-3afa-44a7-8e12-176c8f439cec", 00:38:42.914 "total_data_clusters": 237847, 00:38:42.914 "free_clusters": 237847, 00:38:42.914 "block_size": 512, 00:38:42.914 "cluster_size": 4194304 00:38:42.914 } 00:38:42.914 ]' 00:38:42.914 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="2e938a23-b8bb-4a9d-9b88-ae4430ad4f34") .free_clusters' 00:38:42.914 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # fc=237847 00:38:42.914 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="2e938a23-b8bb-4a9d-9b88-ae4430ad4f34") .cluster_size' 00:38:42.914 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # cs=4194304 00:38:42.914 09:04:37 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # free_mb=951388 00:38:42.914 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1371 -- # echo 951388 00:38:42.914 951388 00:38:42.914 09:04:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:38:42.914 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:42.914 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.171 1e63460a-1a83-44d3-b071-cd0532105521 00:38:43.171 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:43.171 09:04:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:38:43.171 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:43.171 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.171 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:43.172 09:04:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:38:43.172 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:43.172 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.172 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:43.172 09:04:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:38:43.172 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:43.172 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.429 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:43.429 09:04:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:38:43.429 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:38:43.429 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:38:43.429 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:43.429 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:38:43.429 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:43.429 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:38:43.429 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:38:43.429 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:38:43.429 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:43.429 09:04:37 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1342 -- # grep libasan 00:38:43.429 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:38:43.429 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:38:43.430 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:38:43.430 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:38:43.430 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:43.430 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:38:43.430 09:04:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:38:43.430 09:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:38:43.430 09:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:38:43.430 09:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:38:43.430 09:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:38:43.430 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:38:43.430 fio-3.35 00:38:43.430 Starting 1 thread 00:38:43.687 EAL: No free 2048 kB hugepages reported on node 1 00:38:46.213 00:38:46.213 test: (groupid=0, jobs=1): err= 0: pid=2393231: Wed May 15 09:04:40 2024 00:38:46.213 read: IOPS=5847, BW=22.8MiB/s (23.9MB/s)(45.9MiB/2008msec) 00:38:46.213 slat (usec): min=2, max=141, avg= 2.75, stdev= 2.12 00:38:46.213 clat (usec): min=4326, max=20067, avg=12089.01, stdev=1084.21 00:38:46.213 lat (usec): min=4331, max=20070, avg=12091.76, stdev=1084.09 00:38:46.213 clat percentiles (usec): 00:38:46.213 | 1.00th=[ 9634], 5.00th=[10421], 10.00th=[10814], 20.00th=[11207], 00:38:46.213 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:38:46.213 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13435], 95.00th=[13829], 00:38:46.213 | 99.00th=[14484], 99.50th=[14746], 99.90th=[18744], 99.95th=[19006], 00:38:46.213 | 99.99th=[20055] 00:38:46.213 bw ( KiB/s): min=22672, max=23728, per=99.79%, avg=23340.00, stdev=482.33, samples=4 00:38:46.213 iops : min= 5668, max= 5932, avg=5835.00, stdev=120.58, samples=4 00:38:46.213 write: IOPS=5834, BW=22.8MiB/s (23.9MB/s)(45.8MiB/2008msec); 0 zone resets 00:38:46.213 slat (usec): min=2, max=120, avg= 2.85, stdev= 1.56 00:38:46.213 clat (usec): min=2114, max=17541, avg=9720.40, stdev=908.04 00:38:46.213 lat (usec): min=2121, max=17544, avg=9723.26, stdev=907.99 00:38:46.213 clat percentiles (usec): 00:38:46.213 | 1.00th=[ 7701], 5.00th=[ 8356], 10.00th=[ 8717], 20.00th=[ 8979], 00:38:46.213 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:38:46.213 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:38:46.213 | 99.00th=[11600], 99.50th=[12125], 99.90th=[15401], 99.95th=[16712], 00:38:46.213 | 99.99th=[17433] 00:38:46.213 bw ( KiB/s): min=23104, max=23576, per=99.91%, avg=23318.00, stdev=201.26, samples=4 00:38:46.213 iops : min= 5776, max= 5894, avg=5829.50, stdev=50.32, samples=4 00:38:46.213 lat (msec) : 4=0.05%, 
10=32.78%, 20=67.16%, 50=0.01% 00:38:46.213 cpu : usr=60.49%, sys=36.97%, ctx=111, majf=0, minf=34 00:38:46.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:38:46.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:46.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:46.213 issued rwts: total=11741,11716,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:46.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:46.213 00:38:46.213 Run status group 0 (all jobs): 00:38:46.213 READ: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.9MiB (48.1MB), run=2008-2008msec 00:38:46.213 WRITE: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.8MiB (48.0MB), run=2008-2008msec 00:38:46.213 09:04:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:38:46.213 09:04:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:46.213 09:04:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.213 09:04:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:46.213 09:04:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # sync 00:38:46.213 09:04:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:38:46.213 09:04:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:46.213 09:04:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:49.491 09:04:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:49.491 09:04:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:38:49.491 09:04:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:49.491 09:04:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:49.491 09:04:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:49.491 09:04:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:38:49.491 09:04:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:49.491 09:04:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:52.769 09:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:52.769 09:04:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:38:52.769 09:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:52.769 09:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:52.769 09:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:52.769 09:04:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:38:52.769 09:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:52.769 09:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:38:54.143 
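The lbd_nest_0 volume exercised by the run above was sized by the free_mb=951388 computed earlier: bdev_lvol_get_lvstores reported 237847 free clusters in lvs_n_0 at a cluster_size of 4194304 bytes, and get_lvs_free_mb converts clusters to MiB before handing the figure to bdev_lvol_create. A minimal sketch of that arithmetic, assuming rpc.py is on PATH and the lvstore already exists:

  fc=237847                              # free_clusters from bdev_lvol_get_lvstores
  cs=4194304                             # cluster_size in bytes (4 MiB)
  free_mb=$(( fc * cs / 1024 / 1024 ))   # 237847 * 4 = 951388 MiB
  rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 $free_mb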
09:04:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:54.143 rmmod nvme_tcp 00:38:54.143 rmmod nvme_fabrics 00:38:54.143 rmmod nvme_keyring 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2391072 ']' 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2391072 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@947 -- # '[' -z 2391072 ']' 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # kill -0 2391072 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # uname 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2391072 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2391072' 00:38:54.143 killing process with pid 2391072 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # kill 2391072 00:38:54.143 [2024-05-15 09:04:48.696472] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@971 -- # wait 2391072 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:54.143 09:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:56.676 09:04:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:56.676 00:38:56.676 real 0m31.162s 00:38:56.676 user 1m51.115s 00:38:56.676 sys 0m6.569s 00:38:56.676 09:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:38:56.676 09:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.676 
************************************ 00:38:56.676 END TEST nvmf_fio_host 00:38:56.676 ************************************ 00:38:56.676 09:04:50 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:38:56.676 09:04:50 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:38:56.676 09:04:50 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:38:56.676 09:04:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:56.676 ************************************ 00:38:56.676 START TEST nvmf_failover 00:38:56.676 ************************************ 00:38:56.676 09:04:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:38:56.676 * Looking for test storage... 00:38:56.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:56.676 09:04:51 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:56.676 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:38:56.676 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:56.676 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:56.676 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:56.676 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:56.676 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:56.676 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:56.676 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:56.676 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:56.676 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:56.676 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:56.676 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:38:56.676 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:38:56.676 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:56.676 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:56.676 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:56.676 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:38:56.677 09:04:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:38:59.203 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:59.203 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:38:59.203 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:59.203 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:59.203 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:59.203 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:59.203 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:59.203 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:38:59.203 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:59.203 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:38:59.203 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:38:59.203 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:38:59.203 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:38:59.203 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:38:59.203 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:38:59.204 Found 0000:09:00.0 (0x8086 - 0x159b) 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:38:59.204 Found 0000:09:00.1 (0x8086 - 0x159b) 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:59.204 09:04:53 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:38:59.204 Found net devices under 0000:09:00.0: cvl_0_0 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:38:59.204 Found net devices under 0000:09:00.1: cvl_0_1 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:59.204 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:59.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:59.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:38:59.205 00:38:59.205 --- 10.0.0.2 ping statistics --- 00:38:59.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.205 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:59.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:59.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:38:59.205 00:38:59.205 --- 10.0.0.1 ping statistics --- 00:38:59.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.205 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@721 -- # xtrace_disable 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2396735 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2396735 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 2396735 ']' 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:59.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
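nvmf_tgt is now coming up inside the namespace that nvmftestinit assembled a few lines earlier: one port of the e810 pair (cvl_0_1) stays in the default namespace as the initiator side, while its peer (cvl_0_0) is moved into cvl_0_0_ns_spdk for the target. A condensed sketch of that topology, using only commands visible in this log (device names are the ones discovered above; any two connected ports would be wired the same way):

  ip netns add cvl_0_0_ns_spdk                      # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator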
00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:38:59.205 [2024-05-15 09:04:53.604425] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:38:59.205 [2024-05-15 09:04:53.604513] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:59.205 EAL: No free 2048 kB hugepages reported on node 1 00:38:59.205 [2024-05-15 09:04:53.682464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:59.205 [2024-05-15 09:04:53.775741] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:59.205 [2024-05-15 09:04:53.775801] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:59.205 [2024-05-15 09:04:53.775828] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:59.205 [2024-05-15 09:04:53.775842] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:59.205 [2024-05-15 09:04:53.775854] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:59.205 [2024-05-15 09:04:53.775917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:59.205 [2024-05-15 09:04:53.776039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:38:59.205 [2024-05-15 09:04:53.776041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@727 -- # xtrace_disable 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:59.205 09:04:53 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:59.463 [2024-05-15 09:04:54.136669] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:59.463 09:04:54 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:38:59.720 Malloc0 00:38:59.720 09:04:54 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:59.977 09:04:54 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:00.235 09:04:54 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:00.493 [2024-05-15 09:04:55.197313] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:39:00.493 [2024-05-15 09:04:55.197672] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:00.493 09:04:55 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:00.750 [2024-05-15 09:04:55.442374] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:00.750 09:04:55 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:39:01.009 [2024-05-15 09:04:55.735356] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:39:01.009 09:04:55 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2397023 00:39:01.009 09:04:55 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:39:01.009 09:04:55 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:01.009 09:04:55 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2397023 /var/tmp/bdevperf.sock 00:39:01.009 09:04:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 2397023 ']' 00:39:01.009 09:04:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:01.009 09:04:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:39:01.009 09:04:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:01.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
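At this point the failover fixture is complete: a 64 MiB Malloc bdev exported through nqn.2016-06.io.spdk:cnode1 with TCP listeners on ports 4420, 4421 and 4422, plus an idle bdevperf (-z) reachable on its own RPC socket so paths can be added and removed while I/O runs. A sketch distilled from the rpc.py calls above (full script paths abbreviated, and the backgrounding is assumed; the nvmf target itself is already up):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                     # three listeners = three candidate paths
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s $port
  done
  bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &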
00:39:01.009 09:04:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:39:01.009 09:04:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:01.267 09:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:39:01.267 09:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:39:01.267 09:04:56 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:01.861 NVMe0n1 00:39:01.861 09:04:56 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:02.119 00:39:02.119 09:04:56 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2397064 00:39:02.119 09:04:56 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:02.119 09:04:56 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:39:03.054 09:04:57 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:03.313 [2024-05-15 09:04:58.018676] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8f50 is same with the state(5) to be set 00:39:03.313 [2024-05-15 09:04:58.018749] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8f50 is same with the state(5) to be set 00:39:03.313 [2024-05-15 09:04:58.018786] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8f50 is same with the state(5) to be set 00:39:03.313 [2024-05-15 09:04:58.018798] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8f50 is same with the state(5) to be set 00:39:03.313 [2024-05-15 09:04:58.018811] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8f50 is same with the state(5) to be set 00:39:03.313 [2024-05-15 09:04:58.018823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8f50 is same with the state(5) to be set 00:39:03.313 [2024-05-15 09:04:58.018834] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8f50 is same with the state(5) to be set 00:39:03.313 [2024-05-15 09:04:58.018856] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8f50 is same with the state(5) to be set 00:39:03.313 [2024-05-15 09:04:58.018867] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8f50 is same with the state(5) to be set 00:39:03.313 [2024-05-15 09:04:58.018879] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8f50 is same with the state(5) to be set 00:39:03.313 [2024-05-15 09:04:58.018890] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8f50 is same with the state(5) to be set 00:39:03.313 [2024-05-15 09:04:58.018902] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8f50 is same with the state(5) to be set 00:39:03.313 [2024-05-15 09:04:58.018914] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8f50 is same with the state(5) to be set
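The nvmf_tcp_qpair_set_recv_state errors around this point are the expected fallout of the first failover: the 4420 listener was just pulled while bdevperf still held connections on it, so those qpairs are torn down and bdev_nvme fails over to the 4421 path attached beforehand. Reduced to its RPC calls, the sequence the script drives looks like this sketch (addresses and ports from this log; the sleep mirrors the script's pacing):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # primary path
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # standby path
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &   # start the 15 s verify workload
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420                          # drop the active path mid-run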
00:39:03.314 [2024-05-15 09:04:58.019479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8f50 is
same with the state(5) to be set 00:39:03.314 [2024-05-15 09:04:58.019491] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8f50 is same with the state(5) to be set 00:39:03.314 [2024-05-15 09:04:58.019503] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8f50 is same with the state(5) to be set 00:39:03.314 [2024-05-15 09:04:58.019520] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8f50 is same with the state(5) to be set 00:39:03.314 [2024-05-15 09:04:58.019532] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8f50 is same with the state(5) to be set 00:39:03.314 [2024-05-15 09:04:58.019544] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8f50 is same with the state(5) to be set 00:39:03.314 [2024-05-15 09:04:58.019556] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8f50 is same with the state(5) to be set 00:39:03.314 09:04:58 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:39:06.598 09:05:01 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:06.598 00:39:06.856 09:05:01 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:06.856 [2024-05-15 09:05:01.633139] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9e00 is same with the state(5) to be set 00:39:06.856 [2024-05-15 09:05:01.633200] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9e00 is same with the state(5) to be set 00:39:06.856 [2024-05-15 09:05:01.633232] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9e00 is same with the state(5) to be set 00:39:06.856 [2024-05-15 09:05:01.633246] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9e00 is same with the state(5) to be set 00:39:06.856 [2024-05-15 09:05:01.633259] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9e00 is same with the state(5) to be set 00:39:06.856 [2024-05-15 09:05:01.633271] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9e00 is same with the state(5) to be set 00:39:06.856 [2024-05-15 09:05:01.633284] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9e00 is same with the state(5) to be set 00:39:06.856 [2024-05-15 09:05:01.633296] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9e00 is same with the state(5) to be set 00:39:07.113 09:05:01 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:39:10.407 09:05:04 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:10.407 [2024-05-15 09:05:04.929593] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:10.407 09:05:04 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:39:11.340 09:05:05 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:39:11.597 [2024-05-15 09:05:06.178943] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db340 is same with the state(5) to be set 00:39:11.597 [2024-05-15 09:05:06.179001] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db340 is same with the state(5) to be set 00:39:11.597 [2024-05-15 09:05:06.179017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db340 is same with the state(5) to be set 00:39:11.597 [2024-05-15 09:05:06.179030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db340 is same with the state(5) to be set 00:39:11.597 [2024-05-15 09:05:06.179042] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db340 is same with the state(5) to be set 00:39:11.597 [2024-05-15 09:05:06.179055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db340 is same with the state(5) to be set 00:39:11.597 [2024-05-15 09:05:06.179068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db340 is same with the state(5) to be set 00:39:11.597 [2024-05-15 09:05:06.179080] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db340 is same with the state(5) to be set 00:39:11.597 [2024-05-15 09:05:06.179093] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db340 is same with the state(5) to be set 00:39:11.597 09:05:06 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2397064 00:39:18.164 0 00:39:18.164 09:05:11 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2397023 00:39:18.164 09:05:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 2397023 ']' 00:39:18.164 09:05:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 2397023 00:39:18.164 09:05:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:39:18.164 09:05:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:39:18.164 09:05:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2397023 00:39:18.164 09:05:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:39:18.164 09:05:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:39:18.164 09:05:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2397023' 00:39:18.164 killing process with pid 2397023 00:39:18.164 09:05:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 2397023 00:39:18.164 09:05:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 2397023 00:39:18.164 09:05:12 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:39:18.164 [2024-05-15 09:04:55.799938] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:39:18.164 [2024-05-15 09:04:55.800034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2397023 ] 00:39:18.164 EAL: No free 2048 kB hugepages reported on node 1 00:39:18.164 [2024-05-15 09:04:55.869303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:18.164 [2024-05-15 09:04:55.951556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:18.164 Running I/O for 15 seconds... 00:39:18.164 [2024-05-15 09:04:58.020896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.020939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.020965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.020982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.020998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.021013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.021044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.021072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.021100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.021128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.021155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77376 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.021183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.021238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.021268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.021308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.021336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.021364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.021393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.021421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.021450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.021479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 
[2024-05-15 09:04:58.021522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.021550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.021578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.021606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.021634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.164 [2024-05-15 09:04:58.021666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.021696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.021724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.021752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.021781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.021809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.021837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.021864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.021891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.021919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.021946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.021974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.021988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 
09:04:58.022694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.022981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.022994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.023009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.023022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.023037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.023051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.023066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.164 [2024-05-15 09:04:58.023079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.164 [2024-05-15 09:04:58.023094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.165 [2024-05-15 09:04:58.023107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.023125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.165 [2024-05-15 09:04:58.023139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.023154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.165 [2024-05-15 09:04:58.023167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.023182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.165 [2024-05-15 09:04:58.023196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.023242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.165 [2024-05-15 09:04:58.023259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.023275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.165 [2024-05-15 09:04:58.023288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.023304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:22 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.165 [2024-05-15 09:04:58.023318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.023333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.165 [2024-05-15 09:04:58.023347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.023362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.165 [2024-05-15 09:04:58.023376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.023391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.165 [2024-05-15 09:04:58.023405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.023420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.165 [2024-05-15 09:04:58.023435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.023450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.165 [2024-05-15 09:04:58.023464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.023479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.165 [2024-05-15 09:04:58.023492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.023511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.165 [2024-05-15 09:04:58.023543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.023560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.165 [2024-05-15 09:04:58.023574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.023588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.165 [2024-05-15 09:04:58.023602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.023630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 
09:04:58.023646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78152 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.023659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.023718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:18.165 [2024-05-15 09:04:58.023740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.023756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:18.165 [2024-05-15 09:04:58.023770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.023783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:18.165 [2024-05-15 09:04:58.023797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.023811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:18.165 [2024-05-15 09:04:58.023824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.023837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa4600 is same with the state(5) to be set 00:39:18.165 [2024-05-15 09:04:58.024048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.024082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.024096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78160 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.024109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.024126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.024138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.024150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78168 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.024162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.024176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.024187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.024231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78176 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.024250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.024265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.024277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.024288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78184 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.024301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.024315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.024326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.024337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78192 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.024350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.024363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.024374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.024385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78200 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.024398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.024411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.024423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.024434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78208 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.024447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.024461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.024472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.024483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78216 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.024503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.024516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.024542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.024553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78224 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.024565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:39:18.165 [2024-05-15 09:04:58.024579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.024589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.024600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78232 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.024613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.024626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.024637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.024651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78240 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.024664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.024677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.024688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.024699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78248 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.024712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.024724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.024735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.024746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78256 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.024759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.024773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.024783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.024794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78264 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.024806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.024819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.024830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.024841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78272 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.024853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.024866] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.024877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.024889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78280 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.024901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.024914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.024925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.024936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78288 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.024948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.024961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.024971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.024982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78296 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.024995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.025008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.025024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.025036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78304 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.025050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.025063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.025074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.025085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78312 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.025098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.025111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.025122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.025133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78320 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.025146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.025159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.025170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.025181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78328 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.025193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.025231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.025243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.025265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77520 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.025278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.025293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.025305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.025317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77528 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.025330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.025343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.025354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.025366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77536 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.025379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.025393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.025404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.025415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77544 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.025429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.025446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.025458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.025469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77552 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.025482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.025497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.025533] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.025545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77560 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.025558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.025572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.025583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.025595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77568 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.025607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.025621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.025632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.025643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77576 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.025656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.025670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.025681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.025692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77584 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.025704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.025718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.025730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.025741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77592 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.025753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.025766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.025776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.165 [2024-05-15 09:04:58.025788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77600 len:8 PRP1 0x0 PRP2 0x0 00:39:18.165 [2024-05-15 09:04:58.025800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.165 [2024-05-15 09:04:58.025813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.165 [2024-05-15 09:04:58.025823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually:
00:39:18.165 [2024-05-15 09:04:58.025835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77608 len:8 PRP1 0x0 PRP2 0x0
00:39:18.165 [2024-05-15 09:04:58.025851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:18.165 [2024-05-15 09:04:58.025864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:39:18.165 [2024-05-15 09:04:58.025875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[... the same four-entry cycle (print_command, print_completion, abort_queued_reqs, manual_complete_request) repeats between 09:04:58.025886 and 09:04:58.044142 for every command still queued on qpair 1: 30 more READs (lba:77616-77640 and lba:77312-77512, step 8) and 64 WRITEs (lba:77648-78152, step 8), each completed manually with ABORTED - SQ DELETION (00/08) ...]
00:39:18.167 [2024-05-15 09:04:58.044229] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfc3db0 was disconnected and freed. reset controller.
00:39:18.167 [2024-05-15 09:04:58.044285] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:39:18.167 [2024-05-15 09:04:58.044302] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:18.167 [2024-05-15 09:04:58.044359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa4600 (9): Bad file descriptor
00:39:18.167 [2024-05-15 09:04:58.047666] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:18.167 [2024-05-15 09:04:58.122272] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
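The flood of entries above is the SPDK initiator tearing down I/O qpair 1 on nqn.2016-06.io.spdk:cnode1: each command still queued is completed manually with the generic status ABORTED - SQ DELETION (00/08), after which bdev_nvme fails over from 10.0.0.2:4420 to 10.0.0.2:4421 and resets the controller. A minimal sketch for condensing such a capture, assuming only the entry format printed above; the path "build.log" is a placeholder and the script is not part of the test suite:

import re
from collections import defaultdict

# Matches the nvme_io_qpair_print_command entries shown above. \s+ between
# tokens tolerates entries that the console has wrapped across lines.
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command:\s+\*NOTICE\*:\s+"
    r"(?P<op>READ|WRITE)\s+sqid:\d+\s+cid:\d+\s+nsid:\d+\s+"
    r"lba:(?P<lba>\d+)\s+len:\d+"
)

def summarize_aborts(path):
    # Read the whole capture at once so wrapped entries are still matched.
    with open(path) as f:
        text = f.read()
    lbas = defaultdict(list)  # opcode -> LBAs of aborted commands
    for m in CMD_RE.finditer(text):
        lbas[m.group("op")].append(int(m.group("lba")))
    for op, vals in sorted(lbas.items()):
        print(f"{op}: {len(vals)} aborted, lba {min(vals)}..{max(vals)}")

if __name__ == "__main__":
    summarize_aborts("build.log")  # placeholder path

Run against the batch above it would report 31 READs (lba 77312..77640) and 64 WRITEs (lba 77648..78152), which is easier to scan than roughly four hundred log entries.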
00:39:18.167 [2024-05-15 09:05:01.634364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:18.167 [2024-05-15 09:05:01.634409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command / print_completion pair repeats between 09:05:01.634436 and 09:05:01.635329 for 29 more READs queued on the reconnected qpair (lba:82568-82792, step 8, varying cids), each aborted with SQ DELETION (00/08); the capture continues mid-entry at 09:05:01.635345 in the same pattern ...]
READ sqid:1 cid:90 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.168 [2024-05-15 09:05:01.635359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.635374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.168 [2024-05-15 09:05:01.635389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.635404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.168 [2024-05-15 09:05:01.635425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.635441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.168 [2024-05-15 09:05:01.635456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.635471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.168 [2024-05-15 09:05:01.635485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.635500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.168 [2024-05-15 09:05:01.635514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.635545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.168 [2024-05-15 09:05:01.635558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.635574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.168 [2024-05-15 09:05:01.635587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.635602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.168 [2024-05-15 09:05:01.635616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.635630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.168 [2024-05-15 09:05:01.635644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.635659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82880 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.168 [2024-05-15 09:05:01.635672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.635687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.168 [2024-05-15 09:05:01.635701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.635716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.635730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.635744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.635758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.635773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.635787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.635806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.635820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.635835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.635848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.635863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.635877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.635893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.635906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.635921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.635934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.635949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 
[2024-05-15 09:05:01.635963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.635978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.635991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.636980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.636994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.637009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.637022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.637037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.637051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.637066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.637079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.637094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.168 [2024-05-15 09:05:01.637107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.168 [2024-05-15 09:05:01.637122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637179] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637501] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83504 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.637977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.637991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.638006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.638019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.638034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.638048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.638063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.169 [2024-05-15 09:05:01.638076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.638094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.169 
[2024-05-15 09:05:01.638109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.638124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.169 [2024-05-15 09:05:01.638137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.638170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.638187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82912 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.638201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.638487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.638508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.638536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82920 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.638550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.638567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.638580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.638591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82928 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.638604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.638617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.638628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.638639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82936 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.638652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.638665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.638676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.638687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82944 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.638700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.638713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.638723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:39:18.169 [2024-05-15 09:05:01.638734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82560 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.638747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.638760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.638771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.638783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82568 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.638799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.638813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.638824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.638836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82576 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.638848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.638861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.638872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.638883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82584 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.638896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.638909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.638920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.638930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82592 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.638943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.638956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.638966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.638977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82600 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.638989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.639002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.639013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.639024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82608 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.639036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.639048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.639058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.639070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82616 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.639082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.639094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.639105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.639116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82624 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.639128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.639141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.639152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.639166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82632 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.639179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.639192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.639203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.639214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82640 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.639253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.639268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.639280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.639291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82648 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.639305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.639318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.639329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.639340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:82656 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.639353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.639366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.639377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.639389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82664 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.639401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.639414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.639425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.639437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82672 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.639450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.639463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.639474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.639486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82680 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.639498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.639511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.639538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.639549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82688 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.639562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.639579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.639590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.639601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82696 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.639614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.639627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.639637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.639649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82704 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 
[2024-05-15 09:05:01.639661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.639674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.639685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.639697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82712 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.639710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.639722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.639733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.639745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82720 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.639757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.639770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.639780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.639792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82728 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.639804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.639817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.639828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.639839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82736 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.639852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.639864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.639875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.639886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82744 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.639898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.169 [2024-05-15 09:05:01.639911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.169 [2024-05-15 09:05:01.639922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.169 [2024-05-15 09:05:01.639933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82752 len:8 PRP1 0x0 PRP2 0x0 00:39:18.169 [2024-05-15 09:05:01.639949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:18.169 [2024-05-15 09:05:01.639962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:39:18.169 [2024-05-15 09:05:01.639973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:39:18.169 [2024-05-15 09:05:01.639984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82760 len:8 PRP1 0x0 PRP2 0x0
00:39:18.169 [2024-05-15 09:05:01.639997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same four-line abort/manual-completion sequence repeats for READ lba:82768 through lba:82888, WRITE lba:82952 through lba:83576, and READ lba:82896 through lba:82912 (all sqid:1 cid:0 nsid:1 len:8, step 8, each completed ABORTED - SQ DELETION (00/08)) ...]
00:39:18.171 [2024-05-15 09:05:01.650553] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfc5eb0 was disconnected and freed. reset controller.
00:39:18.171 [2024-05-15 09:05:01.650572] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:39:18.171 [2024-05-15 09:05:01.650609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:39:18.171 [2024-05-15 09:05:01.650629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:18.171 [2024-05-15 09:05:01.650659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:39:18.171 [2024-05-15 09:05:01.650674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:18.171 [2024-05-15 09:05:01.650688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:39:18.171 [2024-05-15 09:05:01.650701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:18.171 [2024-05-15 09:05:01.650716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:39:18.171 [2024-05-15 09:05:01.650730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:18.171 [2024-05-15 09:05:01.650743] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:18.171 [2024-05-15 09:05:01.650784] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa4600 (9): Bad file descriptor
00:39:18.171 [2024-05-15 09:05:01.654128] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:18.171 [2024-05-15 09:05:01.683430] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:39:18.171 [2024-05-15 09:05:06.180144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:18.171 [2024-05-15 09:05:06.180189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for READ lba:12744 through lba:12984 (len:8, SGL TRANSPORT DATA BLOCK, varying cid) and WRITE lba:13008 through lba:13064 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, varying cid), each completed ABORTED - SQ DELETION (00/08) ...]
00:39:18.172 [2024-05-15 09:05:06.181392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13072
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.181405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.181419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.181432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.181446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.181459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.181474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.181487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.181501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.181515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.181529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.181541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.181559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.181573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.181588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.181601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.181615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.181628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.181642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.181655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.181670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 
[2024-05-15 09:05:06.181683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.181697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.181710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.181724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.181737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.181751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.181764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.181779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.181791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.181806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.181834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.181851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.181864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.181879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.181892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.181906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.181923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.181939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.181952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.181968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.181982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.181998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:39:18.172 [2024-05-15 09:05:06.182619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.172 [2024-05-15 09:05:06.182875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.172 [2024-05-15 09:05:06.182888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.182903] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.173 [2024-05-15 09:05:06.182917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.182931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.173 [2024-05-15 09:05:06.182944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.182959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.173 [2024-05-15 09:05:06.182973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.182988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.173 [2024-05-15 09:05:06.183001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.173 [2024-05-15 09:05:06.183030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.173 [2024-05-15 09:05:06.183061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.173 [2024-05-15 09:05:06.183091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.173 [2024-05-15 09:05:06.183120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.173 [2024-05-15 09:05:06.183148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.173 [2024-05-15 09:05:06.183176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183191] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.173 [2024-05-15 09:05:06.183204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.173 [2024-05-15 09:05:06.183262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.173 [2024-05-15 09:05:06.183292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.173 [2024-05-15 09:05:06.183321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.173 [2024-05-15 09:05:06.183350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.173 [2024-05-15 09:05:06.183380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.173 [2024-05-15 09:05:06.183409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.173 [2024-05-15 09:05:06.183439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.173 [2024-05-15 09:05:06.183474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.173 [2024-05-15 09:05:06.183504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13640 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.173 [2024-05-15 09:05:06.183548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.173 [2024-05-15 09:05:06.183600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13648 len:8 PRP1 0x0 PRP2 0x0 00:39:18.173 [2024-05-15 09:05:06.183614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.173 [2024-05-15 09:05:06.183645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.173 [2024-05-15 09:05:06.183656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13656 len:8 PRP1 0x0 PRP2 0x0 00:39:18.173 [2024-05-15 09:05:06.183668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.173 [2024-05-15 09:05:06.183692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.173 [2024-05-15 09:05:06.183703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:8 PRP1 0x0 PRP2 0x0 00:39:18.173 [2024-05-15 09:05:06.183716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.173 [2024-05-15 09:05:06.183739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.173 [2024-05-15 09:05:06.183750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13672 len:8 PRP1 0x0 PRP2 0x0 00:39:18.173 [2024-05-15 09:05:06.183763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.173 [2024-05-15 09:05:06.183787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.173 [2024-05-15 09:05:06.183798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13680 len:8 PRP1 0x0 PRP2 0x0 00:39:18.173 [2024-05-15 09:05:06.183810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.173 [2024-05-15 09:05:06.183833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.173 [2024-05-15 09:05:06.183844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13688 len:8 PRP1 0x0 PRP2 0x0 00:39:18.173 [2024-05-15 09:05:06.183857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.173 [2024-05-15 09:05:06.183887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.173 [2024-05-15 09:05:06.183898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:8 PRP1 0x0 PRP2 0x0 00:39:18.173 [2024-05-15 09:05:06.183910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.173 [2024-05-15 09:05:06.183935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.173 [2024-05-15 09:05:06.183946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13704 len:8 PRP1 0x0 PRP2 0x0 00:39:18.173 [2024-05-15 09:05:06.183958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.183971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.173 [2024-05-15 09:05:06.183982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.173 [2024-05-15 09:05:06.183993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13712 len:8 PRP1 0x0 PRP2 0x0 00:39:18.173 [2024-05-15 09:05:06.184006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.184018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.173 [2024-05-15 09:05:06.184029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.173 [2024-05-15 09:05:06.184039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13720 len:8 PRP1 0x0 PRP2 0x0 00:39:18.173 [2024-05-15 09:05:06.184053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.184066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.173 [2024-05-15 09:05:06.184077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.173 [2024-05-15 09:05:06.184088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:8 PRP1 0x0 PRP2 0x0 00:39:18.173 [2024-05-15 09:05:06.184101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.184113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.173 [2024-05-15 09:05:06.184124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.173 [2024-05-15 09:05:06.184135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13736 len:8 PRP1 0x0 PRP2 0x0 00:39:18.173 [2024-05-15 09:05:06.184148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 
[2024-05-15 09:05:06.184160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.173 [2024-05-15 09:05:06.184171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.173 [2024-05-15 09:05:06.184182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13744 len:8 PRP1 0x0 PRP2 0x0 00:39:18.173 [2024-05-15 09:05:06.184194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.184207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.173 [2024-05-15 09:05:06.184241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.173 [2024-05-15 09:05:06.184255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13752 len:8 PRP1 0x0 PRP2 0x0 00:39:18.173 [2024-05-15 09:05:06.184268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.184287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.173 [2024-05-15 09:05:06.184299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.173 [2024-05-15 09:05:06.184311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12992 len:8 PRP1 0x0 PRP2 0x0 00:39:18.173 [2024-05-15 09:05:06.184324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.184337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.173 [2024-05-15 09:05:06.184348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.173 [2024-05-15 09:05:06.184360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13000 len:8 PRP1 0x0 PRP2 0x0 00:39:18.173 [2024-05-15 09:05:06.184373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.184433] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfd33a0 was disconnected and freed. reset controller. 
00:39:18.173 [2024-05-15 09:05:06.184452] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:39:18.173 [2024-05-15 09:05:06.184484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:18.173 [2024-05-15 09:05:06.184504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.184519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:18.173 [2024-05-15 09:05:06.184533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.184547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:18.173 [2024-05-15 09:05:06.184561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.184575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:18.173 [2024-05-15 09:05:06.184588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.173 [2024-05-15 09:05:06.184601] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.173 [2024-05-15 09:05:06.184654] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa4600 (9): Bad file descriptor 00:39:18.173 [2024-05-15 09:05:06.187958] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.173 [2024-05-15 09:05:06.263181] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
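The run above exercised the failover path end to end: every I/O still queued on the deleted submission queue was completed with ABORTED - SQ DELETION, the bdev layer failed over from 10.0.0.2:4422 to 10.0.0.2:4420, reconnected, and logged "Resetting controller successful". As a rough, hand-runnable sketch (not part of this log) of the same sequence against a bdevperf instance: the rpc.py methods, ports, address, and NQN below are the ones this run itself uses; the shell variable names are illustrative.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as used in this workspace
  sock=/var/tmp/bdevperf.sock
  # Register the same subsystem through all three portals; the repeated
  # attaches to -b NVMe0 register 4421/4422 as failover trids.
  for port in 4420 4421 4422; do
      "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # Dropping the active path forces bdev_nvme to abort queued I/O and fail
  # over to the next registered trid, producing output like the lines above.
  "$rpc" -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1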
00:39:18.173
00:39:18.173 Latency(us)
00:39:18.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:18.173 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:39:18.173 Verification LBA range: start 0x0 length 0x4000
00:39:18.173 NVMe0n1 : 15.00 8636.93 33.74 441.81 0.00 14070.93 579.51 34369.99
00:39:18.173 ===================================================================================================================
00:39:18.173 Total : 8636.93 33.74 441.81 0.00 14070.93 579.51 34369.99
00:39:18.173 Received shutdown signal, test time was about 15.000000 seconds
00:39:18.173
00:39:18.173 Latency(us)
00:39:18.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:18.173 ===================================================================================================================
00:39:18.173 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:05:12 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:39:18.173 09:05:12 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:39:18.173 09:05:12 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:39:18.173 09:05:12 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2398879
00:39:18.173 09:05:12 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:39:18.173 09:05:12 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2398879 /var/tmp/bdevperf.sock
00:39:18.173 09:05:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 2398879 ']'
00:39:18.173 09:05:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:39:18.173 09:05:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100
00:39:18.173 09:05:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
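The pass criterion traced just above is a simple message count: one "Resetting controller successful" per forced failover, with three expected for this run. A minimal sketch of that check, assuming the grep reads the per-test log (presumably the try.txt file cat'ed further down; this excerpt does not show the grep's actual input):

  log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt   # assumed input file
  count=$(grep -c 'Resetting controller successful' "$log")
  # host/failover.sh@67 fails the test when the count is anything but 3
  (( count == 3 )) || exit 1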
00:39:18.173 09:05:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:39:18.173 09:05:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:18.173 09:05:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:39:18.173 09:05:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:39:18.173 09:05:12 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:18.173 [2024-05-15 09:05:12.726879] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:18.173 09:05:12 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:39:18.431 [2024-05-15 09:05:12.971544] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:39:18.431 09:05:12 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:18.688 NVMe0n1 00:39:18.688 09:05:13 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:18.945 00:39:18.945 09:05:13 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:19.510 00:39:19.510 09:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:39:19.510 09:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:39:19.511 09:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:19.768 09:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:39:23.048 09:05:17 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:39:23.048 09:05:17 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:39:23.048 09:05:17 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2399545 00:39:23.048 09:05:17 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:23.048 09:05:17 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2399545 00:39:24.421 0 00:39:24.421 09:05:18 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:39:24.421 [2024-05-15 09:05:12.229023] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:39:24.421 [2024-05-15 09:05:12.229119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2398879 ] 00:39:24.421 EAL: No free 2048 kB hugepages reported on node 1 00:39:24.421 [2024-05-15 09:05:12.299816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:24.421 [2024-05-15 09:05:12.379605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:24.421 [2024-05-15 09:05:14.496617] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:39:24.421 [2024-05-15 09:05:14.496710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:24.421 [2024-05-15 09:05:14.496735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:24.421 [2024-05-15 09:05:14.496751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:24.421 [2024-05-15 09:05:14.496765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:24.421 [2024-05-15 09:05:14.496780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:24.421 [2024-05-15 09:05:14.496795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:24.421 [2024-05-15 09:05:14.496809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:24.421 [2024-05-15 09:05:14.496823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:24.421 [2024-05-15 09:05:14.496837] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:24.421 [2024-05-15 09:05:14.496876] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:24.421 [2024-05-15 09:05:14.496909] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1753600 (9): Bad file descriptor 00:39:24.421 [2024-05-15 09:05:14.509877] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:39:24.421 Running I/O for 1 seconds... 
00:39:24.421
00:39:24.421 Latency(us)
00:39:24.421 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:24.421 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:39:24.421 Verification LBA range: start 0x0 length 0x4000
00:39:24.421 NVMe0n1 : 1.01 8631.43 33.72 0.00 0.00 14768.61 3106.89 18252.99
00:39:24.421 ===================================================================================================================
00:39:24.421 Total : 8631.43 33.72 0.00 0.00 14768.61 3106.89 18252.99
00:39:24.421 09:05:18 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:39:24.421 09:05:18 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:39:24.679 09:05:19 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:39:24.679 09:05:19 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
09:05:19 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:39:24.936 09:05:19 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:39:25.194 09:05:19 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:39:28.473 09:05:22 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
09:05:22 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:39:28.473 09:05:23 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2398879
00:39:28.473 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 2398879 ']'
00:39:28.473 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 2398879
00:39:28.473 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname
00:39:28.473 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:39:28.473 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2398879
00:39:28.473 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0
00:39:28.473 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']'
00:39:28.473 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2398879'
killing process with pid 2398879
00:39:28.473 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 2398879
00:39:28.731 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 2398879
00:39:28.731 09:05:23 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:39:28.989
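The teardown just traced follows a detach-and-verify pattern: around each path removal, bdev_nvme_get_controllers piped through grep -q NVMe0 confirms the controller survived the failover. A condensed sketch of the same pattern (the script itself runs these steps unrolled, with a sleep before the final check; the rpc.py usage matches the trace above):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  for port in 4422 4421; do
      "$rpc" -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0   # controller must still be present
      "$rpc" -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  sleep 3
  "$rpc" -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0       # still alive on the last remaining path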
09:05:23 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:28.989 rmmod nvme_tcp 00:39:28.989 rmmod nvme_fabrics 00:39:28.989 rmmod nvme_keyring 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2396735 ']' 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2396735 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 2396735 ']' 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 2396735 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2396735 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2396735' 00:39:28.989 killing process with pid 2396735 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 2396735 00:39:28.989 [2024-05-15 09:05:23.715898] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:39:28.989 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 2396735 00:39:29.247 09:05:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:29.247 09:05:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:29.247 09:05:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:29.247 09:05:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:29.247 09:05:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:29.247 09:05:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:29.247 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:29.247 09:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:31.778 09:05:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:31.778 00:39:31.778 real 0m34.971s 00:39:31.778 user 
2m1.343s 00:39:31.778 sys 0m6.244s 00:39:31.778 09:05:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # xtrace_disable 00:39:31.778 09:05:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:31.778 ************************************ 00:39:31.778 END TEST nvmf_failover 00:39:31.778 ************************************ 00:39:31.778 09:05:26 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:39:31.778 09:05:26 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:39:31.778 09:05:26 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:39:31.778 09:05:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:31.778 ************************************ 00:39:31.778 START TEST nvmf_host_discovery 00:39:31.778 ************************************ 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:39:31.778 * Looking for test storage... 00:39:31.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:31.778 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:31.779 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:39:31.779 09:05:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:39:31.779 09:05:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:39:31.779 09:05:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:39:31.779 09:05:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:39:31.779 09:05:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:39:31.779 09:05:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:39:31.779 09:05:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:39:31.779 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:31.779 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:31.779 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:31.779 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:31.779 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:31.779 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:31.779 09:05:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:31.779 09:05:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:31.779 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:31.779 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:31.779 09:05:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:39:31.779 09:05:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:33.709 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:33.709 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:39:33.709 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:33.709 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:33.709 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:33.709 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:33.709 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:33.709 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:39:33.709 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:33.709 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:39:33.709 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:39:33.709 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:39:33.709 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:39:33.709 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:39:33.709 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:39:33.709 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
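The array seeding traced here and just below is gather_supported_nvmf_pci_devs from nvmf/common.sh: it registers the PCI device IDs of NICs the harness can test against (Intel vendor 0x8086 with E810 IDs 0x1592/0x159b and X722 0x37d2, plus several Mellanox 0x15b3 parts), matches them against the PCI bus, and for TCP runs resolves each hit to its kernel net interface through sysfs. A minimal standalone sketch of that lookup, assuming lspci is available (hypothetical helper, not the harness's own code):

    #!/usr/bin/env bash
    # Resolve Intel E810 NICs (vendor 0x8086, device 0x159b) to their net device
    # names, mirroring the pci_devs -> pci_net_devs walk traced below.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$net" ] && echo "Found net devices under $pci: $(basename "$net")"
        done
    done

On this node the walk finds 0000:09:00.0 and 0000:09:00.1 and collects cvl_0_0 and cvl_0_1, which the TCP init code then splits into target and initiator interfaces, as the traces that follow show.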
00:39:33.709 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:33.709 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:33.709 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:33.709 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:33.709 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:39:33.710 Found 0000:09:00.0 (0x8086 - 0x159b) 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:39:33.710 Found 0000:09:00.1 (0x8086 - 0x159b) 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:39:33.710 Found net devices under 0000:09:00.0: cvl_0_0 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:39:33.710 Found net devices under 0000:09:00.1: cvl_0_1 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:33.710 09:05:28 
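With both ports matched, nvmf_tcp_init isolates the target side of the link: cvl_0_0 moves into a private network namespace (cvl_0_0_ns_spdk), both ends get addresses on 10.0.0.0/24, the NVMe/TCP port is opened in the firewall, and reachability is verified with a ping in each direction. Condensed from the steps traced below (same commands, comments added):

    # Move the target NIC into its own namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # accept NVMe/TCP (port 4420) arriving on cvl_0_1
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator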
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:33.710 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:33.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:33.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:39:33.968 00:39:33.968 --- 10.0.0.2 ping statistics --- 00:39:33.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:33.968 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:33.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:33.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:39:33.968 00:39:33.968 --- 10.0.0.1 ping statistics --- 00:39:33.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:33.968 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2402449 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2402449 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 2402449 ']' 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:33.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:39:33.968 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:33.968 [2024-05-15 09:05:28.588533] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:39:33.968 [2024-05-15 09:05:28.588619] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:33.968 EAL: No free 2048 kB hugepages reported on node 1 00:39:33.968 [2024-05-15 09:05:28.669803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:33.968 [2024-05-15 09:05:28.758313] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:39:33.968 [2024-05-15 09:05:28.758370] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:33.968 [2024-05-15 09:05:28.758396] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:33.968 [2024-05-15 09:05:28.758410] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:33.968 [2024-05-15 09:05:28.758423] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:33.968 [2024-05-15 09:05:28.758454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:34.224 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:39:34.224 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:39:34.224 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:34.224 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:39:34.224 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.224 09:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:34.224 09:05:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:34.224 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.224 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.224 [2024-05-15 09:05:28.908478] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:34.224 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.224 09:05:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:39:34.224 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.224 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.225 [2024-05-15 09:05:28.916415] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:39:34.225 [2024-05-15 09:05:28.916739] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.225 null0 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.225 null1 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2402576 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2402576 /tmp/host.sock 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 2402576 ']' 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:39:34.225 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:39:34.225 09:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.225 [2024-05-15 09:05:28.987320] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:39:34.225 [2024-05-15 09:05:28.987387] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2402576 ] 00:39:34.481 EAL: No free 2048 kB hugepages reported on node 1 00:39:34.481 [2024-05-15 09:05:29.058242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:34.481 [2024-05-15 09:05:29.146104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:34.481 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:39:34.481 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:39:34.481 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:34.481 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:39:34.481 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.481 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.481 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.481 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:39:34.481 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.481 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.739 09:05:29 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:34.739 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.997 [2024-05-15 09:05:29.546348] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:39:34.997 
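Every assertion in this test funnels through the same two layers traced here: small query helpers (get_subsystem_names, get_bdev_list, get_subsystem_paths) that issue an RPC against the host application's /tmp/host.sock and normalize the JSON with jq, sort, and xargs, and a retry wrapper that re-evaluates a condition until it holds. A condensed sketch of that pattern, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper (illustrative, not a verbatim copy of autotest_common.sh):

    # Re-evaluate a shell condition up to 10 times, one second apart,
    # as the waitforcondition traces above and below do.
    waitforcondition() {
        local cond=$1 max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

    # List attached NVMe controller names via the host app's RPC socket.
    get_subsystem_names() {
        scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'

Before any subsystem exists the helpers return empty strings, which is why the checks above compare against ''; once bdev_nvme_start_discovery attaches nvme0, the same poll flips to success on a later iteration.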
09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == \n\v\m\e\0 ]] 00:39:34.997 09:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:39:35.562 [2024-05-15 09:05:30.326003] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:39:35.562 [2024-05-15 09:05:30.326044] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:39:35.562 [2024-05-15 09:05:30.326070] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:39:35.820 [2024-05-15 09:05:30.454487] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:39:35.820 [2024-05-15 09:05:30.558212] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:39:35.820 [2024-05-15 09:05:30.558263] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 == \4\4\2\0 ]] 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:39:36.078 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:36.335 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:36.335 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:39:36.335 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:39:36.335 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:39:36.335 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:39:36.335 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:39:36.335 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:36.335 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:36.335 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:36.335 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:39:36.336 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:39:36.336 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:39:36.336 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:39:36.336 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:39:36.336 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:39:36.336 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:36.336 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:36.336 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:36.336 09:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:36.336 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:36.336 09:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:36.336 09:05:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:36.336 09:05:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:39:36.336 09:05:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:37.708 09:05:32 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:37.708 [2024-05-15 09:05:32.190129] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:37.708 [2024-05-15 09:05:32.190499] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:39:37.708 [2024-05-15 09:05:32.190551] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@912 -- # local max=10 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:37.708 [2024-05-15 09:05:32.316904] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:39:37.708 09:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:39:37.708 [2024-05-15 09:05:32.374457] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:39:37.709 [2024-05-15 09:05:32.374481] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:39:37.709 [2024-05-15 09:05:32.374490] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:39:38.638 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:39:38.638 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:39:38.638 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:39:38.638 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:39:38.639 09:05:33 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:38.639 [2024-05-15 09:05:33.426868] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:39:38.639 [2024-05-15 09:05:33.426914] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:39:38.639 [2024-05-15 09:05:33.430623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:38.639 [2024-05-15 09:05:33.430663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.639 [2024-05-15 09:05:33.430683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # (( max-- )) 00:39:38.639 [2024-05-15 09:05:33.430698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.639 [2024-05-15 09:05:33.430714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:38.639 [2024-05-15 09:05:33.430730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.639 [2024-05-15 09:05:33.430762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:38.639 [2024-05-15 09:05:33.430776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.639 [2024-05-15 09:05:33.430790] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204ec60 is same with the state(5) to be set 00:39:38.639 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:39:38.926 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:39:38.926 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:38.926 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.926 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:39:38.926 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:38.926 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:39:38.926 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:39:38.926 [2024-05-15 09:05:33.440628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204ec60 (9): Bad file descriptor 00:39:38.926 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.926 [2024-05-15 09:05:33.450691] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:38.926 [2024-05-15 09:05:33.450987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.926 [2024-05-15 09:05:33.451175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.927 [2024-05-15 09:05:33.451241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204ec60 with addr=10.0.0.2, port=4420 00:39:38.927 [2024-05-15 09:05:33.451294] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204ec60 is same with the state(5) to be set 00:39:38.927 [2024-05-15 09:05:33.451334] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204ec60 (9): Bad file descriptor 00:39:38.927 [2024-05-15 09:05:33.451410] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:38.927 [2024-05-15 09:05:33.451443] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:39:38.927 [2024-05-15 09:05:33.451472] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
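
The @911-@917 markers threaded through the trace above come from autotest_common.sh's waitforcondition helper, which retries an arbitrary bash condition string once per second for up to ten attempts. A minimal sketch reconstructed from those trace lines (the helper name and @-line semantics are taken directly from the log; the exact upstream body may differ):

# Poll a bash condition string for up to ~10 seconds.
# Reconstruction from the @911-@917 trace lines, not verbatim SPDK source.
waitforcondition() {
	local cond=$1   # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
	local max=10    # @912 in the trace
	while ((max--)); do                  # @913
		eval "$cond" && return 0     # @914/@915: condition met
		sleep 1                      # @917: retry after one second
	done
	return 1    # gave up without the condition ever becoming true
}
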
00:39:38.927 [2024-05-15 09:05:33.451507] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:38.927 [2024-05-15 09:05:33.460797] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:38.927 [2024-05-15 09:05:33.460996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.927 [2024-05-15 09:05:33.461161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.927 [2024-05-15 09:05:33.461188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204ec60 with addr=10.0.0.2, port=4420 00:39:38.927 [2024-05-15 09:05:33.461232] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204ec60 is same with the state(5) to be set 00:39:38.927 [2024-05-15 09:05:33.461280] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204ec60 (9): Bad file descriptor 00:39:38.927 [2024-05-15 09:05:33.461303] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:38.927 [2024-05-15 09:05:33.461318] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:39:38.927 [2024-05-15 09:05:33.461332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:38.927 [2024-05-15 09:05:33.461351] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:38.927 [2024-05-15 09:05:33.470882] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:38.927 [2024-05-15 09:05:33.471059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.927 [2024-05-15 09:05:33.471239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.927 [2024-05-15 09:05:33.471285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204ec60 with addr=10.0.0.2, port=4420 00:39:38.927 [2024-05-15 09:05:33.471303] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204ec60 is same with the state(5) to be set 00:39:38.927 [2024-05-15 09:05:33.471326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204ec60 (9): Bad file descriptor 00:39:38.927 [2024-05-15 09:05:33.471347] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:38.927 [2024-05-15 09:05:33.471363] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:39:38.927 [2024-05-15 09:05:33.471377] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:38.927 [2024-05-15 09:05:33.471397] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
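
The @59 lines interleaved with the reconnect errors are discovery.sh's get_subsystem_names helper, which asks the host-side bdev layer which NVMe controllers are currently attached. A sketch matching the pipeline shown in the trace, assuming rpc_cmd forwards to scripts/rpc.py against the host app's private socket:

# Sorted, space-separated controller names from the host SPDK instance.
get_subsystem_names() {
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
		| jq -r '.[].name' | sort | xargs
}
# In this trace it yields "nvme0" while the discovery service keeps the
# controller attached, and "" once bdev_nvme_stop_discovery detaches it.
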
00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:38.927 [2024-05-15 09:05:33.480964] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:38.927 [2024-05-15 09:05:33.481141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.927 [2024-05-15 09:05:33.481338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.927 [2024-05-15 09:05:33.481366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204ec60 with addr=10.0.0.2, port=4420 00:39:38.927 [2024-05-15 09:05:33.481383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204ec60 is same with the state(5) to be set 00:39:38.927 [2024-05-15 09:05:33.481406] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204ec60 (9): Bad file descriptor 00:39:38.927 [2024-05-15 09:05:33.481440] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:38.927 [2024-05-15 09:05:33.481459] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:39:38.927 [2024-05-15 09:05:33.481473] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:38.927 [2024-05-15 09:05:33.481522] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
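
The bdev-list checks above follow the same pattern through @55. A sketch of that helper, under the same assumption that rpc_cmd wraps scripts/rpc.py:

# Sorted, space-separated bdev names; this is the string that the
# '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' conditions compare against.
get_bdev_list() {
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
		| jq -r '.[].name' | sort | xargs
}
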
00:39:38.927 [2024-05-15 09:05:33.491045] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:38.927 [2024-05-15 09:05:33.491247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.927 [2024-05-15 09:05:33.491388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.927 [2024-05-15 09:05:33.491414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204ec60 with addr=10.0.0.2, port=4420 00:39:38.927 [2024-05-15 09:05:33.491431] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204ec60 is same with the state(5) to be set 00:39:38.927 [2024-05-15 09:05:33.491454] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204ec60 (9): Bad file descriptor 00:39:38.927 [2024-05-15 09:05:33.491504] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:38.927 [2024-05-15 09:05:33.491525] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:39:38.927 [2024-05-15 09:05:33.491541] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:38.927 [2024-05-15 09:05:33.491577] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:38.927 [2024-05-15 09:05:33.501124] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:38.927 [2024-05-15 09:05:33.501379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.927 [2024-05-15 09:05:33.501505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.927 [2024-05-15 09:05:33.501531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204ec60 with addr=10.0.0.2, port=4420 00:39:38.927 [2024-05-15 09:05:33.501565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204ec60 is same with the state(5) to be set 00:39:38.927 [2024-05-15 09:05:33.501596] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204ec60 (9): Bad file descriptor 00:39:38.927 [2024-05-15 09:05:33.501634] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:38.927 [2024-05-15 09:05:33.501655] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:39:38.927 [2024-05-15 09:05:33.501670] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:38.927 [2024-05-15 09:05:33.501691] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
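
The remaining checks in this test body reduce to two more helpers: @63 lists the TCP service ports a controller has paths over, and @74/@75 maintain the notification cursor. A sketch consistent with the trace, where notify_id advances by the number of events just consumed (it moves 2 to 4 here when two removal events arrive):

# @63: service IDs (ports) of every path of one controller, e.g. "4420 4421".
get_subsystem_paths() {
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
		| jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
# @74/@75: count events newer than $notify_id, then advance the cursor so the
# next is_notification_count_eq check only sees new events.
get_notification_count() {
	notification_count=$(rpc_cmd -s /tmp/host.sock \
		notify_get_notifications -i "$notify_id" | jq '. | length')
	notify_id=$((notify_id + notification_count))
}
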
00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.927 [2024-05-15 09:05:33.511201] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:38.927 [2024-05-15 09:05:33.511417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.927 [2024-05-15 09:05:33.511572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.927 [2024-05-15 09:05:33.511600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204ec60 with addr=10.0.0.2, port=4420 00:39:38.927 [2024-05-15 09:05:33.511618] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204ec60 is same with the state(5) to be set 00:39:38.927 [2024-05-15 09:05:33.511642] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204ec60 (9): Bad file descriptor 00:39:38.927 [2024-05-15 09:05:33.511681] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:38.927 [2024-05-15 09:05:33.511701] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:39:38.927 [2024-05-15 09:05:33.511717] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:38.927 [2024-05-15 09:05:33.511751] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:38.927 [2024-05-15 09:05:33.512724] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:39:38.927 [2024-05-15 09:05:33.512756] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.927 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4421 == \4\4\2\1 ]] 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]] 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:38.928 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]] 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:39.185 09:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:40.117 [2024-05-15 09:05:34.798376] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:39:40.117 [2024-05-15 09:05:34.798400] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:39:40.117 [2024-05-15 09:05:34.798423] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:39:40.117 [2024-05-15 09:05:34.884704] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:39:40.375 [2024-05-15 09:05:34.951958] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:39:40.375 [2024-05-15 09:05:34.951998] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:40.375 request: 00:39:40.375 { 00:39:40.375 "name": "nvme", 00:39:40.375 "trtype": "tcp", 00:39:40.375 "traddr": "10.0.0.2", 00:39:40.375 "hostnqn": "nqn.2021-12.io.spdk:test", 00:39:40.375 "adrfam": "ipv4", 00:39:40.375 "trsvcid": "8009", 00:39:40.375 "wait_for_attach": true, 00:39:40.375 "method": "bdev_nvme_start_discovery", 00:39:40.375 "req_id": 1 00:39:40.375 } 00:39:40.375 Got JSON-RPC error response 00:39:40.375 response: 00:39:40.375 { 00:39:40.375 "code": -17, 00:39:40.375 "message": "File exists" 00:39:40.375 } 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:39:40.375 09:05:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:40.375 09:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:39:40.375 09:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:39:40.375 09:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:40.375 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:40.375 09:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:40.375 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:40.375 09:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:40.375 09:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:40.375 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:40.375 09:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:39:40.375 09:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:39:40.375 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:39:40.375 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:39:40.375 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:39:40.375 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:39:40.375 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:39:40.375 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:39:40.375 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:39:40.375 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:40.375 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:40.375 request: 00:39:40.375 { 00:39:40.375 "name": "nvme_second", 00:39:40.375 "trtype": "tcp", 00:39:40.375 "traddr": "10.0.0.2", 00:39:40.375 "hostnqn": "nqn.2021-12.io.spdk:test", 00:39:40.375 "adrfam": "ipv4", 00:39:40.375 "trsvcid": "8009", 00:39:40.375 "wait_for_attach": true, 00:39:40.375 "method": "bdev_nvme_start_discovery", 00:39:40.375 "req_id": 1 00:39:40.375 } 00:39:40.375 Got JSON-RPC error response 00:39:40.375 response: 00:39:40.375 { 00:39:40.375 "code": -17, 00:39:40.375 "message": "File exists" 00:39:40.375 } 00:39:40.375 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:40.376 
09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:40.376 09:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:41.747 [2024-05-15 09:05:36.163506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.747 [2024-05-15 09:05:36.163717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.747 [2024-05-15 09:05:36.163745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204d0b0 with addr=10.0.0.2, port=8010 00:39:41.747 [2024-05-15 09:05:36.163775] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:39:41.747 [2024-05-15 09:05:36.163790] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:41.747 [2024-05-15 09:05:36.163805] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:39:42.681 [2024-05-15 09:05:37.165972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.681 [2024-05-15 09:05:37.166154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.681 [2024-05-15 09:05:37.166184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204d0b0 with addr=10.0.0.2, port=8010 00:39:42.681 [2024-05-15 09:05:37.166225] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:39:42.681 [2024-05-15 09:05:37.166244] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:42.681 [2024-05-15 09:05:37.166275] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:39:43.616 [2024-05-15 09:05:38.168113] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:39:43.616 request: 00:39:43.616 { 00:39:43.616 "name": "nvme_second", 00:39:43.616 "trtype": "tcp", 00:39:43.616 "traddr": "10.0.0.2", 00:39:43.616 "hostnqn": "nqn.2021-12.io.spdk:test", 00:39:43.616 
"adrfam": "ipv4", 00:39:43.616 "trsvcid": "8010", 00:39:43.616 "attach_timeout_ms": 3000, 00:39:43.616 "method": "bdev_nvme_start_discovery", 00:39:43.616 "req_id": 1 00:39:43.616 } 00:39:43.616 Got JSON-RPC error response 00:39:43.616 response: 00:39:43.616 { 00:39:43.616 "code": -110, 00:39:43.616 "message": "Connection timed out" 00:39:43.616 } 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2402576 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:43.616 rmmod nvme_tcp 00:39:43.616 rmmod nvme_fabrics 00:39:43.616 rmmod nvme_keyring 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2402449 ']' 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2402449 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@947 -- # '[' -z 2402449 ']' 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # kill -0 2402449 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # uname 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2402449 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2402449' 00:39:43.616 killing process with pid 2402449 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # kill 2402449 00:39:43.616 [2024-05-15 09:05:38.282650] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:39:43.616 09:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@971 -- # wait 2402449 00:39:43.875 09:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:43.875 09:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:43.875 09:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:43.875 09:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:43.875 09:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:43.875 09:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:43.875 09:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:43.875 09:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:45.775 09:05:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:45.775 00:39:45.775 real 0m14.501s 00:39:45.775 user 0m21.149s 00:39:45.775 sys 0m3.168s 00:39:45.775 09:05:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:39:45.775 09:05:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:45.775 ************************************ 00:39:45.775 END TEST nvmf_host_discovery 00:39:45.775 ************************************ 00:39:46.033 09:05:40 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:39:46.033 09:05:40 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:39:46.033 09:05:40 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:39:46.033 09:05:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:46.033 ************************************ 00:39:46.033 START TEST nvmf_host_multipath_status 00:39:46.033 ************************************ 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:39:46.033 * Looking for test storage... 
00:39:46.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:46.033 09:05:40 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:39:46.033 09:05:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:39:48.564 Found 0000:09:00.0 (0x8086 - 0x159b) 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:39:48.564 Found 0000:09:00.1 (0x8086 - 0x159b) 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
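The nvmf_tcp_init steps traced below move one of the two discovered E810 ports into a private network namespace for the target and leave the other on the host for the initiator, then verify connectivity in both directions. Condensed into plain commands, the sequence is roughly the following sketch; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are exactly as logged:

    # Target port goes into its own namespace; initiator port stays on the host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP
    ping -c 1 10.0.0.2                                                   # host -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> host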
00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:39:48.564 Found net devices under 0000:09:00.0: cvl_0_0 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:39:48.564 Found net devices under 0000:09:00.1: cvl_0_1 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:48.564 09:05:43 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:48.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:48.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:39:48.564 00:39:48.564 --- 10.0.0.2 ping statistics --- 00:39:48.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:48.564 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:48.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:48.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:39:48.564 00:39:48.564 --- 10.0.0.1 ping statistics --- 00:39:48.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:48.564 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:48.564 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@721 -- # xtrace_disable 00:39:48.565 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:39:48.565 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2406044 00:39:48.565 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:39:48.565 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2406044 00:39:48.565 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 2406044 ']' 00:39:48.565 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:48.565 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:39:48.565 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:48.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:48.565 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:39:48.565 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:39:48.565 [2024-05-15 09:05:43.281228] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:39:48.565 [2024-05-15 09:05:43.281327] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:48.565 EAL: No free 2048 kB hugepages reported on node 1 00:39:48.822 [2024-05-15 09:05:43.360195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:48.822 [2024-05-15 09:05:43.449070] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:48.822 [2024-05-15 09:05:43.449119] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:48.822 [2024-05-15 09:05:43.449145] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:48.822 [2024-05-15 09:05:43.449160] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:48.822 [2024-05-15 09:05:43.449172] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:48.822 [2024-05-15 09:05:43.449250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:48.822 [2024-05-15 09:05:43.449270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:48.822 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:39:48.822 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:39:48.822 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:48.822 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@727 -- # xtrace_disable 00:39:48.822 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:39:48.822 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:48.822 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2406044 00:39:48.822 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:49.080 [2024-05-15 09:05:43.808317] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:49.080 09:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:39:49.645 Malloc0 00:39:49.645 09:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:39:49.645 09:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:49.903 09:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:50.161 [2024-05-15 09:05:44.877485] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:39:50.161 [2024-05-15 09:05:44.877835] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:50.162 09:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:50.420 [2024-05-15 09:05:45.118367] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:50.420 09:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2406321 00:39:50.420 09:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:39:50.420 09:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:39:50.420 09:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2406321 /var/tmp/bdevperf.sock 00:39:50.420 09:05:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 2406321 ']' 00:39:50.420 09:05:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:50.420 09:05:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:39:50.420 09:05:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:50.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
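Condensing the target- and initiator-side RPCs traced around this point: nvmf_tgt (started inside the namespace with ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3) exposes one Malloc LUN behind subsystem nqn.2016-06.io.spdk:cnode1 on ports 4420 and 4421, and bdevperf then attaches both listeners as two paths of a single multipath controller. A sketch assembled from the traced commands, with SPDK_DIR assumed as before:

    RPC="$SPDK_DIR/scripts/rpc.py"
    # Target side (default RPC socket /var/tmp/spdk.sock):
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # Initiator side; bdevperf was started as:
    #   bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

Both attach calls report Nvme0n1, confirming the two listeners resolve to one namespace reached over two paths.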
00:39:50.420 09:05:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:39:50.420 09:05:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:39:50.679 09:05:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:39:50.679 09:05:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:39:50.679 09:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:39:50.937 09:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:39:51.504 Nvme0n1 00:39:51.504 09:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:39:52.070 Nvme0n1 00:39:52.070 09:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:39:52.070 09:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:39:53.974 09:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:39:53.974 09:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:39:54.232 09:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:39:54.521 09:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:39:55.499 09:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:39:55.499 09:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:39:55.499 09:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:55.499 09:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:39:55.759 09:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:55.759 09:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:39:55.759 09:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:55.759 09:05:50 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:39:56.018 09:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:39:56.018 09:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:39:56.018 09:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:56.018 09:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:39:56.276 09:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:56.276 09:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:39:56.276 09:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:56.277 09:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:39:56.536 09:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:56.536 09:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:39:56.536 09:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:56.536 09:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:39:56.795 09:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:56.795 09:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:39:56.795 09:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:56.795 09:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:39:57.053 09:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:57.053 09:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:39:57.053 09:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:39:57.312 09:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:39:57.570 09:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:39:58.516 09:05:53 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:39:58.516 09:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:39:58.516 09:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:58.517 09:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:39:58.781 09:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:39:58.781 09:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:39:58.781 09:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:58.781 09:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:39:59.040 09:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:59.040 09:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:39:59.040 09:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:59.040 09:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:39:59.298 09:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:59.298 09:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:39:59.298 09:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:59.298 09:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:39:59.555 09:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:59.555 09:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:39:59.555 09:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:59.555 09:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:39:59.814 09:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:59.814 09:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:39:59.814 09:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:59.814 09:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:00.072 09:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:00.072 09:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:40:00.072 09:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:00.330 09:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:40:00.588 09:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:40:01.522 09:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:40:01.522 09:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:01.522 09:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:01.522 09:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:01.780 09:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:01.780 09:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:01.780 09:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:01.780 09:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:02.037 09:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:02.037 09:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:02.037 09:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:02.037 09:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:02.295 09:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:02.295 09:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:02.295 09:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:02.295 09:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:02.553 09:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:02.553 09:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:02.553 09:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:02.553 09:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:02.812 09:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:02.812 09:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:02.812 09:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:02.812 09:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:03.070 09:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:03.070 09:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:40:03.070 09:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:03.328 09:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:40:03.586 09:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:40:04.517 09:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:40:04.517 09:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:04.517 09:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:04.517 09:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:04.774 09:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:04.774 09:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:04.774 09:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:04.774 09:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:05.031 09:05:59 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:05.031 09:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:05.031 09:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:05.031 09:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:05.289 09:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:05.289 09:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:05.289 09:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:05.289 09:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:05.547 09:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:05.547 09:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:05.547 09:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:05.547 09:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:05.805 09:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:05.805 09:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:40:05.805 09:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:05.805 09:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:06.062 09:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:06.062 09:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:40:06.062 09:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:40:06.319 09:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:40:06.576 09:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:40:07.507 09:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:40:07.507 09:06:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:07.507 09:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:07.507 09:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:07.768 09:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:07.768 09:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:07.768 09:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:07.768 09:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:08.119 09:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:08.119 09:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:08.120 09:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:08.120 09:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:08.378 09:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:08.378 09:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:08.378 09:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:08.378 09:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:08.378 09:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:08.378 09:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:40:08.378 09:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:08.378 09:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:08.635 09:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:08.635 09:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:40:08.635 09:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:08.635 09:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:08.892 09:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:08.892 09:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:40:08.892 09:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:40:09.149 09:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:40:09.442 09:06:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:40:10.373 09:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:40:10.373 09:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:10.373 09:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:10.373 09:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:10.630 09:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:10.630 09:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:10.630 09:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:10.630 09:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:10.887 09:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:10.887 09:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:10.887 09:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:10.887 09:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:11.145 09:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:11.145 09:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:11.145 09:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:11.145 09:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:11.403 09:06:06 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:11.403 09:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:40:11.403 09:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:11.403 09:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:11.659 09:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:11.659 09:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:11.659 09:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:11.659 09:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:11.916 09:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:11.916 09:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:40:12.174 09:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:40:12.174 09:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:40:12.431 09:06:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:40:12.688 09:06:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:40:13.619 09:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:40:13.619 09:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:13.619 09:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:13.619 09:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:13.876 09:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:13.876 09:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:13.876 09:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:13.876 09:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:40:14.134 09:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:14.134 09:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:14.134 09:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:14.134 09:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:14.392 09:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:14.392 09:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:14.392 09:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:14.392 09:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:14.649 09:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:14.649 09:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:14.649 09:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:14.649 09:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:14.907 09:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:14.907 09:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:15.164 09:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:15.164 09:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:15.164 09:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:15.164 09:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:40:15.164 09:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:15.423 09:06:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:40:15.681 09:06:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:40:17.052 09:06:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:40:17.052 09:06:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:17.052 09:06:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:17.052 09:06:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:17.052 09:06:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:17.052 09:06:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:17.052 09:06:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:17.052 09:06:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:17.309 09:06:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:17.309 09:06:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:17.309 09:06:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:17.309 09:06:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:17.567 09:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:17.567 09:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:17.567 09:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:17.567 09:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:17.824 09:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:17.824 09:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:17.824 09:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:17.824 09:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:18.081 09:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:18.081 09:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:18.081 09:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:18.081 09:06:12 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:18.338 09:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:18.338 09:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:40:18.338 09:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:18.595 09:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:40:18.852 09:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:40:19.784 09:06:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:40:19.784 09:06:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:19.784 09:06:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:19.784 09:06:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:20.041 09:06:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:20.041 09:06:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:20.041 09:06:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:20.041 09:06:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:20.299 09:06:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:20.299 09:06:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:20.299 09:06:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:20.299 09:06:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:20.557 09:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:20.557 09:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:20.557 09:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:20.557 09:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:20.815 09:06:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:20.815 09:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:20.815 09:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:20.815 09:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:21.075 09:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:21.075 09:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:21.075 09:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:21.075 09:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:21.388 09:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:21.388 09:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:40:21.388 09:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:21.645 09:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:40:21.903 09:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:40:22.833 09:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:40:22.833 09:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:22.833 09:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:22.833 09:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:23.090 09:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:23.090 09:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:23.090 09:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:23.090 09:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:23.347 09:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:23.347 09:06:17 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:23.347 09:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:23.347 09:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:23.604 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:23.604 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:23.604 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:23.604 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:23.861 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:23.862 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:23.862 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:23.862 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:24.119 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:24.119 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:40:24.119 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:24.119 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:24.376 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:24.376 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2406321 00:40:24.376 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 2406321 ']' 00:40:24.376 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 2406321 00:40:24.376 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:40:24.376 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:40:24.376 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2406321 00:40:24.376 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:40:24.376 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:40:24.376 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 
2406321' 00:40:24.376 killing process with pid 2406321 00:40:24.376 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 2406321 00:40:24.376 09:06:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 2406321 00:40:24.376 Connection closed with partial response: 00:40:24.376 00:40:24.376 00:40:24.646 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2406321 00:40:24.646 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:40:24.646 [2024-05-15 09:05:45.173404] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:40:24.646 [2024-05-15 09:05:45.173498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2406321 ] 00:40:24.646 EAL: No free 2048 kB hugepages reported on node 1 00:40:24.646 [2024-05-15 09:05:45.242565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:24.646 [2024-05-15 09:05:45.323583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:40:24.646 Running I/O for 90 seconds... 00:40:24.646 [2024-05-15 09:06:00.899030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.646 [2024-05-15 09:06:00.899087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:40:24.646 [2024-05-15 09:06:00.899164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.646 [2024-05-15 09:06:00.899201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:24.646 [2024-05-15 09:06:00.899233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.646 [2024-05-15 09:06:00.899276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:24.646 [2024-05-15 09:06:00.899300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:55952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.646 [2024-05-15 09:06:00.899317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:24.646 [2024-05-15 09:06:00.899340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:55960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.646 [2024-05-15 09:06:00.899356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:24.646 [2024-05-15 09:06:00.899378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.646 [2024-05-15 09:06:00.899395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:40:24.646 [2024-05-15 09:06:00.899418] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.646 [2024-05-15 09:06:00.899434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:40:24.646 [2024-05-15 09:06:00.899457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.646 [2024-05-15 09:06:00.899473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:40:24.646 [2024-05-15 09:06:00.899495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:55992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.646 [2024-05-15 09:06:00.899523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:40:24.646 [2024-05-15 09:06:00.899952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:56000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.646 [2024-05-15 09:06:00.899976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:40:24.646 [2024-05-15 09:06:00.900005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:56008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.646 [2024-05-15 09:06:00.900037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:40:24.646 [2024-05-15 09:06:00.900062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:56016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.646 [2024-05-15 09:06:00.900080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:40:24.646 [2024-05-15 09:06:00.900104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:56024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.646 [2024-05-15 09:06:00.900122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:40:24.646 [2024-05-15 09:06:00.900146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:56032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.646 [2024-05-15 09:06:00.900164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.900188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:56040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.900205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.900240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:56048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.900261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000c 
p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.900284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:56056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.900302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.900325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:56064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.900343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.900365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:56072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.900382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.900405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:56080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.900424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.900447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:56088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.900464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.900488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:56096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.900511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.900550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:56104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.900567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.900595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:56112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.900612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.900635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:56120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.900651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.900673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:56128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.900690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.900712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:56136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.900729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.900751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:56144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.900767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.900805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:56152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.900822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.900869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:56160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.900886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.900909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:56168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.900925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.900948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:56176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.900964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.900987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:56184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.901004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.901026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:56192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.901042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.901065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:56200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.901081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.901108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:56208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.901125] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.901147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:56216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.901164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.901186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:56224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.901226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.901252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:56232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.901270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.901293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:56240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.901310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.901333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:56248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.901349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.901373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:56256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.901389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:24.647 [2024-05-15 09:06:00.901411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:56264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.647 [2024-05-15 09:06:00.901428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.901451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:56272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.901468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.901490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:56280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.901518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.901557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:56288 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.901573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.901595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:56296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.901610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.901632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:56304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.901660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.901683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:56312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.901714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.901737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:56320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.901753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.901774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:56328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.901789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.901810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:56336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.901826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.901847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:56344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.901863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.901886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:56352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.901901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.901923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:56360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.901939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.901960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:39 nsid:1 lba:56368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.901976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.901997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:56376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.902013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.902034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:56384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.902050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.902071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:56392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.902086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.902108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.902127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.902149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:56408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.902165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.902187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:56416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.902227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.902253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:56424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.902270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.902292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.902307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.902329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:56440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.902345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.902367] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:56448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.902383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.902405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.902421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.902443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:56464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.902459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.902481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:56472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.902506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.902529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:56480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.902545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.902582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:56488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.902598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:40:24.648 [2024-05-15 09:06:00.902619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.648 [2024-05-15 09:06:00.902639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.902661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:56504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.649 [2024-05-15 09:06:00.902676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.902698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:56520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.902713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.902739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.902754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 
sqhd:0047 p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.902776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:56536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.902791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.902812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.902828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.902864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.902881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.902903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.902918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.902941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.902957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.903225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.903249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.903282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:56584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.903301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.903329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:56592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.903347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.903374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:56600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.903391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.903424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:56608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.903442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.903470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:56616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.903488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.903515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:56624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.903547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.903575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:56632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.903591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.903618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:56640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.903634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.903672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:56648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.903688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.903715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:56656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.903731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.903757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:56664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.903774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.903800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:56672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.903816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.903843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:56680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.903860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.903887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.903903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.903929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:56696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.903945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.903987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:56704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.904005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.904031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.904048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.904074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.904091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.904117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:56728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.904134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:40:24.649 [2024-05-15 09:06:00.904160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:56736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.649 [2024-05-15 09:06:00.904177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.904237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:56744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.904256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.904285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:56752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.904302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.904329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:56760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.904346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.904373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:56768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
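[editor note] The flood of ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions in this try.txt dump is bdevperf I/O completing against a listener whose ANA state the test had flipped to inaccessible; the test verifies the resulting path state by querying bdevperf over its RPC socket and filtering with jq, as the -x trace earlier in this log shows (multipath_status.sh@64). A minimal sketch of that port_status check, reconstructed from the traced commands -- the helper name and argument order are inferred from the trace, not copied from the SPDK source, and it assumes the same rpc.py path and bdevperf socket seen above:

    rpc_py='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'

    port_status() {
        local port=$1 attr=$2 expected=$3
        # Ask bdevperf for its NVMe I/O paths and read one attribute
        # (current/connected/accessible) of the path on trsvcid $port.
        # $rpc_py is intentionally unquoted so the -s flag word-splits.
        local actual
        actual=$($rpc_py bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }

In the trace this is invoked as, e.g., port_status 4420 current false, and the [[ false == \f\a\l\s\e ]] lines are bash's -x rendering of that final comparison.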
00:40:24.650 [2024-05-15 09:06:00.904390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.904418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.904434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.904462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:56784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.904479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.904532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:56792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.904549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.904576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:56800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.904600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.904628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.904645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.904671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:56816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.904688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.904714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:56824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.904731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.904757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:56832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.904774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.904801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:56840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.904818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.904844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 
lba:56848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.904861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.904887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:56856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.904904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.904931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.904948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.904975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:56872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.904992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.905018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:56880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.905034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.905060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:56888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.905076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.905102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.905122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.905150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:56904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.905166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.905193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.905233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.905263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:56920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.905280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.905307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.905325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.905352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:56936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.905369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.905397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:56944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.905414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:00.905443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:56952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:00.905460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:16.446351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:16.446409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:16.446447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.650 [2024-05-15 09:06:16.446467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:40:24.650 [2024-05-15 09:06:16.446491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:32048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.651 [2024-05-15 09:06:16.446520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.446543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.651 [2024-05-15 09:06:16.446562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.446599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.651 [2024-05-15 09:06:16.446616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.446666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.651 [2024-05-15 09:06:16.446684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
00:40:24.651 [2024-05-15 09:06:16.446707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:32112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.651 [2024-05-15 09:06:16.446737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.446758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.651 [2024-05-15 09:06:16.446774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.446810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:32144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.651 [2024-05-15 09:06:16.446825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.446846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.651 [2024-05-15 09:06:16.446862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.446883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.651 [2024-05-15 09:06:16.446899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.446920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:32192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.651 [2024-05-15 09:06:16.446938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.446959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:32208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.651 [2024-05-15 09:06:16.446975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.446996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.651 [2024-05-15 09:06:16.447012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.447033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.651 [2024-05-15 09:06:16.447049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.447070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.651 [2024-05-15 09:06:16.447087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.447109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.651 [2024-05-15 09:06:16.447125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.447390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.651 [2024-05-15 09:06:16.447414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.447442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.651 [2024-05-15 09:06:16.447461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.447485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:32320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.651 [2024-05-15 09:06:16.447502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.447531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.651 [2024-05-15 09:06:16.447548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.447571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:32352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.651 [2024-05-15 09:06:16.447589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.447612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:31456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.651 [2024-05-15 09:06:16.447629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.447652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.651 [2024-05-15 09:06:16.447670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.447692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.651 [2024-05-15 09:06:16.447710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.447733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.651 [2024-05-15 09:06:16.447749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.447773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:31584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.651 [2024-05-15 09:06:16.447791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.447813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.651 [2024-05-15 09:06:16.447830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.447853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.651 [2024-05-15 09:06:16.447870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.447893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.651 [2024-05-15 09:06:16.447915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.447954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.651 [2024-05-15 09:06:16.447972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:40:24.651 [2024-05-15 09:06:16.447993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:32360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.651 [2024-05-15 09:06:16.448027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:40:24.652 [2024-05-15 09:06:16.448051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.652 [2024-05-15 09:06:16.448069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:40:24.652 [2024-05-15 09:06:16.448091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.652 [2024-05-15 09:06:16.448108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:40:24.652 [2024-05-15 09:06:16.448147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.652 [2024-05-15 09:06:16.448164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:40:24.652 [2024-05-15 09:06:16.448186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:31544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:40:24.652 [2024-05-15 09:06:16.448210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:40:24.652 [2024-05-15 09:06:16.448492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:32368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.652 [2024-05-15 09:06:16.448527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:40:24.652 [2024-05-15 09:06:16.448555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:32384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.652 [2024-05-15 09:06:16.448574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:40:24.652 [2024-05-15 09:06:16.448598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:32400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.652 [2024-05-15 09:06:16.448615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:40:24.652 [2024-05-15 09:06:16.448638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:32416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.652 [2024-05-15 09:06:16.448655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:40:24.652 [2024-05-15 09:06:16.448677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.652 [2024-05-15 09:06:16.448694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:40:24.652 [2024-05-15 09:06:16.448717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.652 [2024-05-15 09:06:16.448738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:40:24.652 [2024-05-15 09:06:16.448763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.652 [2024-05-15 09:06:16.448781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:40:24.652 [2024-05-15 09:06:16.448804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.652 [2024-05-15 09:06:16.448836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:40:24.652 [2024-05-15 09:06:16.448860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.652 [2024-05-15 09:06:16.448877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:40:24.652 [2024-05-15 09:06:16.448914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 
nsid:1 lba:31720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.652 [2024-05-15 09:06:16.448931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:24.652 [2024-05-15 09:06:16.448952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:32448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.653 [2024-05-15 09:06:16.448968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.448990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.653 [2024-05-15 09:06:16.449006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.653 [2024-05-15 09:06:16.449043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.653 [2024-05-15 09:06:16.449081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:31856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.653 [2024-05-15 09:06:16.449118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.653 [2024-05-15 09:06:16.449155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.653 [2024-05-15 09:06:16.449192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.653 [2024-05-15 09:06:16.449265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.653 [2024-05-15 09:06:16.449328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:31736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.653 [2024-05-15 09:06:16.449368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.653 [2024-05-15 09:06:16.449407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.653 [2024-05-15 09:06:16.449447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:31832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.653 [2024-05-15 09:06:16.449487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:31864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.653 [2024-05-15 09:06:16.449530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:31896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.653 [2024-05-15 09:06:16.449569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.653 [2024-05-15 09:06:16.449608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.653 [2024-05-15 09:06:16.449647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.653 [2024-05-15 09:06:16.449701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.653 [2024-05-15 09:06:16.449739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:40:24.653 [2024-05-15 09:06:16.449760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.653 [2024-05-15 09:06:16.449776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.653 [2024-05-15 09:06:16.449836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:32128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.653 [2024-05-15 09:06:16.449873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:32160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.653 [2024-05-15 09:06:16.449909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:32192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.653 [2024-05-15 09:06:16.449946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.449967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.653 [2024-05-15 09:06:16.449983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.450004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.653 [2024-05-15 09:06:16.450020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.451233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:32304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.653 [2024-05-15 09:06:16.451259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.451287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.653 [2024-05-15 09:06:16.451306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.451329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.653 [2024-05-15 09:06:16.451346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:122 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:40:24.653 [2024-05-15 09:06:16.451369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.451387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.451410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.451430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.451453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.451470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.451507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.451529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.451553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.451584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.451607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.451624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.454419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.454446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.454474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.454493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.454516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:32104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.454534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.454557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.454574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.454597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.454614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.454637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.454654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.454676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:32232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.454694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.454717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.454734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.454757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:32296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.454788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.454812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:32328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.454834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.454873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.654 [2024-05-15 09:06:16.454890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.454912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.654 [2024-05-15 09:06:16.454928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.454949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:32432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.654 [2024-05-15 09:06:16.454965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.454986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:24.654 [2024-05-15 09:06:16.455002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.455024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.455040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.455061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:32448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.654 [2024-05-15 09:06:16.455076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.455098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:31792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.455114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.455136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.455152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.455173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.455189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.455233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.455251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.455290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.455307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.455330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:31832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.455346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.455375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.455392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.455415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.455431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.455454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:32032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.654 [2024-05-15 09:06:16.455471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.455494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.654 [2024-05-15 09:06:16.455526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.455549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.654 [2024-05-15 09:06:16.455565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.455602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.654 [2024-05-15 09:06:16.455619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.455641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:32376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.455656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.455677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.654 [2024-05-15 09:06:16.455693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.455714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:32336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.654 [2024-05-15 09:06:16.455730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:40:24.654 [2024-05-15 09:06:16.455751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.655 [2024-05-15 09:06:16.455767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.455789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.655 [2024-05-15 09:06:16.455805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.455826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.655 [2024-05-15 09:06:16.455842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.455868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.655 [2024-05-15 09:06:16.455884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.455906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:32048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.655 [2024-05-15 09:06:16.455922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.455943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:32112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.655 [2024-05-15 09:06:16.455959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.455980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.455996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.456018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:32472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.456033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.456055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.456071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.456092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.456108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.456129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:32520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.456145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.456167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:32536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.456183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
00:40:24.655 [2024-05-15 09:06:16.456204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:32552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.456244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.456270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:32568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.456287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.456309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:32208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.655 [2024-05-15 09:06:16.456326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.456347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:32272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.655 [2024-05-15 09:06:16.456367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.456391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.655 [2024-05-15 09:06:16.456407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.458980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.655 [2024-05-15 09:06:16.459021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.459050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:32584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.459069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.459092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:32600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.459109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.459131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.459148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.459170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:32632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.459187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.459209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.459235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.459259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.459276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.459298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:32680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.459315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.459338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.459354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.459377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:32712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.459393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.459416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:32728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.459438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.459462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:32744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.459479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.459501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.459518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.459540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:32776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.459557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:40:24.655 [2024-05-15 09:06:16.459580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.655 [2024-05-15 09:06:16.459597] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:40:24.655 [2024-05-15 09:06:16.459619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:24.655 [2024-05-15 09:06:16.459636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:24.655 [2024-05-15 09:06:16.459815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:32400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:24.655 [2024-05-15 09:06:16.459832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
[... the same NOTICE command/completion pattern repeats for the remaining outstanding I/O on qid:1: READ and WRITE, nsid:1, len:8, lba range 31448-33456; every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) with p:0 m:0 dnr:0; wall-clock 09:06:16.459-09:06:16.476, elapsed 00:40:24.655-00:40:24.661 ...]
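The run above is easier to audit in aggregate than record by record. The following is not part of the captured log: it is a minimal Python sketch, assuming one NOTICE record per line in exactly the nvme_io_qpair_print_command / spdk_nvme_print_completion format shown above. It counts commands per opcode and completions per status and reports the touched LBA range; the file name summarize_qpair_notices.py is hypothetical.

    #!/usr/bin/env python3
    """Summarize SPDK nvme_qpair *NOTICE* records (command and completion prints) from stdin."""
    import re
    import sys
    from collections import Counter

    # Matches e.g. "nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32072 len:8"
    CMD_RE = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>READ|WRITE) "
        r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) lba:(?P<lba>\d+) len:(?P<len>\d+)"
    )
    # Matches e.g. "spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33"
    CPL_RE = re.compile(
        r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>.+?) "
        r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) qid:(?P<qid>\d+) cid:(?P<cid>\d+)"
    )

    def main():
        commands = Counter()   # opcode -> count
        statuses = Counter()   # "STATUS (sct/sc)" -> count
        lbas = []              # all LBAs seen, to report the touched range
        for line in sys.stdin:
            m = CMD_RE.search(line)
            if m:
                commands[m.group("op")] += 1
                lbas.append(int(m.group("lba")))
                continue
            m = CPL_RE.search(line)
            if m:
                key = "%s (%s/%s)" % (m.group("status"), m.group("sct"), m.group("sc"))
                statuses[key] += 1
        for op, n in sorted(commands.items()):
            print("%-5s commands: %d" % (op, n))
        for status, n in sorted(statuses.items()):
            print("completions with %s: %d" % (status, n))
        if lbas:
            print("lba range: %d-%d" % (min(lbas), max(lbas)))

    if __name__ == "__main__":
        main()

Run as: python3 summarize_qpair_notices.py < console.log. For the excerpt above it would report only ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions on qid:1, consistent with the namespace's ANA group being in the inaccessible state while I/O is still queued (SCT 0x3 path-related, SC 0x2 in NVMe terms). The log resumes below.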
00:40:24.661 [2024-05-15 09:06:16.476485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.476507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.661 [2024-05-15 09:06:16.476537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.476559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.661 [2024-05-15 09:06:16.476574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.476595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:32864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.661 [2024-05-15 09:06:16.476610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.476631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.661 [2024-05-15 09:06:16.476648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.476669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:32760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.661 [2024-05-15 09:06:16.476688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.476727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.661 [2024-05-15 09:06:16.476743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.476779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:32736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.661 [2024-05-15 09:06:16.476796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.476818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.661 [2024-05-15 09:06:16.476834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.476856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.661 [2024-05-15 09:06:16.476873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.476896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 
lba:33472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.661 [2024-05-15 09:06:16.476913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.476935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.661 [2024-05-15 09:06:16.476952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.478273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.661 [2024-05-15 09:06:16.478309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.478338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.661 [2024-05-15 09:06:16.478366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.478388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:33528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.661 [2024-05-15 09:06:16.478405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.478427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.661 [2024-05-15 09:06:16.478443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.478466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.661 [2024-05-15 09:06:16.478482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.478504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:33576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.661 [2024-05-15 09:06:16.478527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.478551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.661 [2024-05-15 09:06:16.478568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.478591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:33608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.661 [2024-05-15 09:06:16.478607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.478629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:33624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.661 [2024-05-15 09:06:16.478646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.478669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:33640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.661 [2024-05-15 09:06:16.478685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.478708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:33264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.661 [2024-05-15 09:06:16.478724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.478747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:33296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.661 [2024-05-15 09:06:16.478763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.478785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.661 [2024-05-15 09:06:16.478802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.478824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:33160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.661 [2024-05-15 09:06:16.478840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.478863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:33224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.661 [2024-05-15 09:06:16.478879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.478901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.661 [2024-05-15 09:06:16.478918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.478940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:33016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.661 [2024-05-15 09:06:16.478956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.478979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.661 [2024-05-15 09:06:16.478995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 
00:40:24.661 [2024-05-15 09:06:16.479022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:33664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.661 [2024-05-15 09:06:16.479040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.479062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.661 [2024-05-15 09:06:16.479092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.479116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:33368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.661 [2024-05-15 09:06:16.479132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.479154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.661 [2024-05-15 09:06:16.479170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:40:24.661 [2024-05-15 09:06:16.479206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:33432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.661 [2024-05-15 09:06:16.479229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.479268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:33136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.662 [2024-05-15 09:06:16.479285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.479308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.662 [2024-05-15 09:06:16.479325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.479348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:32840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.662 [2024-05-15 09:06:16.479365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.479387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.662 [2024-05-15 09:06:16.479403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.479426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:32648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.662 [2024-05-15 09:06:16.479442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.479465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.662 [2024-05-15 09:06:16.479481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.479504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.662 [2024-05-15 09:06:16.479521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.480167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:33144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.662 [2024-05-15 09:06:16.480192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.480227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.662 [2024-05-15 09:06:16.480247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.480271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:32616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.662 [2024-05-15 09:06:16.480288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.480311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:32864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.662 [2024-05-15 09:06:16.480328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.480350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.662 [2024-05-15 09:06:16.480367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.480389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.662 [2024-05-15 09:06:16.480406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.480429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:33456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.662 [2024-05-15 09:06:16.480445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.480469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:33064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.662 [2024-05-15 09:06:16.480486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.481731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:33672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.662 [2024-05-15 09:06:16.481756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.481784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:33688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.662 [2024-05-15 09:06:16.481803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.481826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.662 [2024-05-15 09:06:16.481844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.481867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.662 [2024-05-15 09:06:16.481884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.481908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:33736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.662 [2024-05-15 09:06:16.481930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.481954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:33752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.662 [2024-05-15 09:06:16.481971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.481994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.662 [2024-05-15 09:06:16.482010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.482033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:33328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.662 [2024-05-15 09:06:16.482050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.482073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:33360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.662 [2024-05-15 09:06:16.482090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.482128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:33392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:24.662 [2024-05-15 09:06:16.482144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.482166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:33424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.662 [2024-05-15 09:06:16.482182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.482235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:33512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.662 [2024-05-15 09:06:16.482255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.482279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.662 [2024-05-15 09:06:16.482296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.482319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:33576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.662 [2024-05-15 09:06:16.482335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.482358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.662 [2024-05-15 09:06:16.482374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.482397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:33640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.662 [2024-05-15 09:06:16.482413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:40:24.662 [2024-05-15 09:06:16.482436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.482456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.482481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:33160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.482498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.482520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.482538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.482560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:33648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.663 [2024-05-15 09:06:16.482577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.482599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:33336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.663 [2024-05-15 09:06:16.482615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.482637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.663 [2024-05-15 09:06:16.482654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.482692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:33136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.482709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.482731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.482748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.482770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:32648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.482785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.482823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.482841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.482863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:33288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.482879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.482900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:33176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.482915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.482936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.482952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.482977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.663 [2024-05-15 09:06:16.482994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.483015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:32864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.483031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.483052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.483067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.483088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.483104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.485088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:32096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.485114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.485142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.485160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.485184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.485201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.485232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:33792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.663 [2024-05-15 09:06:16.485250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.485280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:33808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.663 [2024-05-15 09:06:16.485296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.485319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:33824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.663 [2024-05-15 09:06:16.485335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:40:24.663 [2024-05-15 09:06:16.485358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:33840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.663 [2024-05-15 09:06:16.485374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.485396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:33856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.663 [2024-05-15 09:06:16.485412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.485440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:33872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.663 [2024-05-15 09:06:16.485458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.485480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.485508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.485546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:33520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.485563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.485584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.485616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.485639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.485655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.485678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:33616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.485694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.485718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.663 [2024-05-15 09:06:16.485735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.485757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.663 [2024-05-15 09:06:16.485774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.485796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:33752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.663 [2024-05-15 09:06:16.485812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.485835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:33328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.485851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.485874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.663 [2024-05-15 09:06:16.485891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:24.663 [2024-05-15 09:06:16.485913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:33512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.663 [2024-05-15 09:06:16.485930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.485952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:33576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.664 [2024-05-15 09:06:16.485973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.486011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:33640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.664 [2024-05-15 09:06:16.486029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.486051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:33160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.664 [2024-05-15 09:06:16.486082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.486105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.664 [2024-05-15 09:06:16.486122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.486143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:33400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.664 [2024-05-15 09:06:16.486160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.486183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.664 [2024-05-15 09:06:16.486199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.486228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:32552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.664 [2024-05-15 09:06:16.486258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.486281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.664 [2024-05-15 09:06:16.486298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.486320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:32856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.664 [2024-05-15 09:06:16.486336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.486359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:32736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.664 [2024-05-15 09:06:16.486375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.487061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.664 [2024-05-15 09:06:16.487085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.487128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:33904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.664 [2024-05-15 09:06:16.487147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.487170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.664 [2024-05-15 09:06:16.487191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.487241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:33936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.664 [2024-05-15 09:06:16.487272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.487295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.664 [2024-05-15 09:06:16.487312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.487334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:33968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:40:24.664 [2024-05-15 09:06:16.487350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.487373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:33656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.664 [2024-05-15 09:06:16.487390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.487412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:33352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.664 [2024-05-15 09:06:16.487428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.487451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:33416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.664 [2024-05-15 09:06:16.487468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.488492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:33208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.664 [2024-05-15 09:06:16.488521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.488548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.664 [2024-05-15 09:06:16.488567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.488590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.664 [2024-05-15 09:06:16.488607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.488630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.664 [2024-05-15 09:06:16.488647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.488669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.664 [2024-05-15 09:06:16.488685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.488722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.664 [2024-05-15 09:06:16.488740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.488767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 
lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.664 [2024-05-15 09:06:16.488784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.488821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:33680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.664 [2024-05-15 09:06:16.488837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.488874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.664 [2024-05-15 09:06:16.488891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.488912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.664 [2024-05-15 09:06:16.488944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.488969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.664 [2024-05-15 09:06:16.488986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.489009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:33448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.664 [2024-05-15 09:06:16.489026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.489048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:33792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.664 [2024-05-15 09:06:16.489064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.489087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.664 [2024-05-15 09:06:16.489104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.489126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:33856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.664 [2024-05-15 09:06:16.489143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.489166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.664 [2024-05-15 09:06:16.489182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.489204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:33552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.664 [2024-05-15 09:06:16.489231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.489256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.664 [2024-05-15 09:06:16.489273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:24.664 [2024-05-15 09:06:16.489300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:33720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.665 [2024-05-15 09:06:16.489318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:24.665 [2024-05-15 09:06:16.489340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:33328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.665 [2024-05-15 09:06:16.489357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:24.665 [2024-05-15 09:06:16.489379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:33512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.665 [2024-05-15 09:06:16.489396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:40:24.665 [2024-05-15 09:06:16.489418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:33640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.665 [2024-05-15 09:06:16.489435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:40:24.665 [2024-05-15 09:06:16.489457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.665 [2024-05-15 09:06:16.489473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:40:24.665 [2024-05-15 09:06:16.489496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.665 [2024-05-15 09:06:16.489536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:24.665 [2024-05-15 09:06:16.489557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:33176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.665 [2024-05-15 09:06:16.489573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:24.665 [2024-05-15 09:06:16.489593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.665 [2024-05-15 09:06:16.489608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 
dnr:0 00:40:24.665 [2024-05-15 09:06:16.489629 through 09:06:16.501675] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [log condensed: roughly 120 consecutive command/completion pairs elided. Every outstanding READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) on sqid:1, nsid:1, len:8, lba range 32736 to 34512, completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, with sqhd advancing from 002f past 007f and wrapping to 0024]
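The wall of "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" completions condensed above is the multipath test driving the active path's ANA group into an inaccessible state: every queued READ and WRITE on qid:1 is completed with that ANA status so the host-side multipath logic can retry the I/O on the surviving path. As a hedged illustration only (these commands are not part of the test script, and /dev/nvme0 is a hypothetical device name), the per-path ANA state could be inspected from the host with stock nvme-cli:
    # Show each subsystem with its paths and the ANA state nvme-cli reports
    # for them; a path whose group went inaccessible shows up here.
    nvme list-subsys
    # Dump the raw ANA log page (log page identifier 0x0c) for one controller.
    nvme get-log /dev/nvme0 --log-id=0x0c --log-len=4096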
00:40:24.668 Received shutdown signal, test time was about 32.225581 seconds
00:40:24.668
00:40:24.668                                            Latency(us)
00:40:24.668 Device Information : runtime(s)      IOPS     MiB/s   Fail/s   TO/s    Average       min        max
00:40:24.668 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:40:24.668 Verification LBA range: start 0x0 length 0x4000
00:40:24.668 Nvme0n1            :      32.22   7921.18     30.94     0.00   0.00   16130.29    259.41 4026531.84
00:40:24.668 ===================================================================================================================
00:40:24.668 Total              :              7921.18     30.94     0.00   0.00   16130.29    259.41 4026531.84
00:40:24.668 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:24.925 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:40:24.925 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:40:24.925 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:40:24.925 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:24.925 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:40:24.925 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:24.925 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:40:24.925 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:24.925 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
09:06:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2406044 ']' 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2406044 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 2406044 ']' 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 2406044 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2406044 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_0 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2406044'
killing process with pid 2406044
09:06:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 2406044
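The verification summary above is internally consistent and easy to check by hand: at the job's 4096-byte IO size, 7921.18 IOPS works out to 7921.18 * 4096 / 2^20 = 30.94 MiB/s, exactly the value in the MiB/s column. A throwaway one-liner (illustrative only, not part of the suite) that redoes the arithmetic:
    # Recompute the MiB/s column of the summary above from its IOPS column
    # and the 4096-byte IO size; prints 30.94.
    awk 'BEGIN { printf "%.2f\n", 7921.18 * 4096 / (1024 * 1024) }'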
[2024-05-15 09:06:19.545102] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:40:24.925 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 2406044 00:40:25.185 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:25.185 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:25.185 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:25.185 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:25.185 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:25.185 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:25.185 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:25.185 09:06:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:27.086 09:06:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:27.086 00:40:27.086 real 0m41.211s 00:40:27.086 user 2m3.265s 00:40:27.086 sys 0m10.524s 00:40:27.086 09:06:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # xtrace_disable 00:40:27.086 09:06:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:40:27.086 ************************************ 00:40:27.086 END TEST nvmf_host_multipath_status 00:40:27.086 ************************************ 00:40:27.086 09:06:21 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:40:27.086 09:06:21 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:40:27.086 09:06:21 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:40:27.086 09:06:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:27.086 ************************************ 00:40:27.086 START TEST nvmf_discovery_remove_ifc 00:40:27.086 ************************************ 00:40:27.086 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:40:27.344 * Looking for test storage... 
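The END TEST / START TEST banners and the time(1) output above come from the harness's run_test wrapper, which times each test script and frames its output before the next test begins; the new test's storage probe continues just below. A minimal sketch of such a wrapper, inferred from the banners in this log rather than copied from autotest_common.sh:
    # run_test-style wrapper (a sketch inferred from the banners above, not
    # the actual autotest_common.sh implementation): time a test script and
    # frame its output with START/END banners.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # emits the real/user/sys lines seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }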
00:40:27.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:40:27.344 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:27.344 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:40:27.344 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:27.344 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:27.344 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:27.344 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:27.344 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:27.344 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:27.344 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:27.344 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:27.344 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:27.344 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:27.344 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:40:27.344 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:40:27.344 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:27.344 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:27.344 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain triplet repeated by earlier re-sourcings, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same long toolchain PATH as above, elided] 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same long toolchain PATH as above, elided] 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same long toolchain PATH as above, elided] 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- #
host_sock=/tmp/host.sock 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:40:27.345 09:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:29.872 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:40:29.873 Found 0000:09:00.0 (0x8086 - 0x159b) 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:40:29.873 Found 0000:09:00.1 (0x8086 - 0x159b) 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:29.873 09:06:24 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:40:29.873 Found net devices under 0000:09:00.0: cvl_0_0 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:40:29.873 Found net devices under 0000:09:00.1: cvl_0_1 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:29.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:29.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:40:29.873 00:40:29.873 --- 10.0.0.2 ping statistics --- 00:40:29.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:29.873 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:29.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:29.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:40:29.873 00:40:29.873 --- 10.0.0.1 ping statistics --- 00:40:29.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:29.873 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@721 -- # xtrace_disable 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2413420 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2413420 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 2413420 ']' 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:29.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:40:29.873 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:29.873 [2024-05-15 09:06:24.510915] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
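[annotation] nvmf_tcp_init, traced above, builds the test topology: one physical port (cvl_0_0) is moved into a network namespace and acts as the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings verify both directions before any SPDK process starts. Collected here for readability (commands as traced; that the two ports can reach each other, i.e. are cabled back-to-back or share a switch, is an inference):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # ns -> initiator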
00:40:29.873 [2024-05-15 09:06:24.511003] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:29.873 EAL: No free 2048 kB hugepages reported on node 1 00:40:29.873 [2024-05-15 09:06:24.584998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:30.130 [2024-05-15 09:06:24.672007] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:30.130 [2024-05-15 09:06:24.672072] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:30.130 [2024-05-15 09:06:24.672086] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:30.130 [2024-05-15 09:06:24.672097] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:30.130 [2024-05-15 09:06:24.672106] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:30.130 [2024-05-15 09:06:24.672146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:30.130 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:40:30.130 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:40:30.130 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:30.130 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@727 -- # xtrace_disable 00:40:30.130 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:30.130 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:30.130 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:40:30.130 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:30.130 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:30.130 [2024-05-15 09:06:24.825432] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:30.130 [2024-05-15 09:06:24.833404] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:40:30.130 [2024-05-15 09:06:24.833682] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:40:30.130 null0 00:40:30.131 [2024-05-15 09:06:24.865603] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:30.131 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:30.131 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2413447 00:40:30.131 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:40:30.131 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2413447 /tmp/host.sock 00:40:30.131 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 2413447 ']' 00:40:30.131 09:06:24 
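[annotation] Note that this test runs two separate SPDK applications: the nvmf target inside the namespace (pid 2413420, answering RPCs on the default /var/tmp/spdk.sock) and a second nvmf_tgt outside the namespace playing the host/initiator role (pid 2413447, on /tmp/host.sock, started with --wait-for-rpc and bdev_nvme debug logging). Both command lines are as traced; $rootdir stands in for the Jenkins workspace path shown in the log:

    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!      # waitforlisten polls /var/tmp/spdk.sock for this pid
    "$rootdir/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    hostpid=$!      # waitforlisten polls /tmp/host.sock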
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:40:30.131 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 00:40:30.131 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:40:30.131 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:40:30.131 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:40:30.131 09:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:30.388 [2024-05-15 09:06:24.928899] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:40:30.388 [2024-05-15 09:06:24.928974] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2413447 ] 00:40:30.388 EAL: No free 2048 kB hugepages reported on node 1 00:40:30.388 [2024-05-15 09:06:24.993110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:30.388 [2024-05-15 09:06:25.073139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:30.388 09:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:40:30.388 09:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:40:30.388 09:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:30.388 09:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:40:30.388 09:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:30.388 09:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:30.388 09:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:30.388 09:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:40:30.388 09:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:30.388 09:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:30.646 09:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:30.646 09:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:40:30.646 09:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:30.646 09:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:31.578 [2024-05-15 09:06:26.284332] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:40:31.578 [2024-05-15 09:06:26.284376] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:40:31.578 [2024-05-15 
09:06:26.284402] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:40:31.578 [2024-05-15 09:06:26.370667] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:40:31.836 [2024-05-15 09:06:26.594603] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:40:31.836 [2024-05-15 09:06:26.594664] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:40:31.836 [2024-05-15 09:06:26.594700] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:40:31.836 [2024-05-15 09:06:26.594722] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:40:31.836 [2024-05-15 09:06:26.594752] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:40:31.836 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:31.836 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:40:31.836 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:31.836 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:31.836 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:31.836 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:31.836 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:31.836 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:31.836 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:31.836 [2024-05-15 09:06:26.601773] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x63d7c0 was disconnected and freed. delete nvme_qpair. 
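[annotation] The attach sequence above was kicked off by the discovery RPC traced at discovery_remove_ifc.sh@69. The deliberately short timeouts are the point of the test: once the path dies, reconnect attempts 1 s apart fail for ~2 s and the controller is then deleted. Repeated here for readability (rpc_cmd wraps scripts/rpc.py against the socket given by -s):

    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    # --wait-for-attach blocks until the subsystem's bdev (nvme0n1) exists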
00:40:31.836 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:32.093 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:40:32.093 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:40:32.094 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:40:32.094 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:40:32.094 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:32.094 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:32.094 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:32.094 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:32.094 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:32.094 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:32.094 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:32.094 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:32.094 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:40:32.094 09:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:40:33.025 09:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:33.025 09:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:33.025 09:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:33.025 09:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:33.025 09:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:33.025 09:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:33.025 09:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:33.025 09:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:33.025 09:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:40:33.025 09:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:40:34.398 09:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:34.398 09:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:34.398 09:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:34.398 09:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:34.398 09:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:34.398 09:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:40:34.398 09:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:34.398 09:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:34.398 09:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:40:34.398 09:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:40:35.331 09:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:35.331 09:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:35.331 09:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:35.331 09:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:35.331 09:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:35.331 09:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:35.331 09:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:35.331 09:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:35.331 09:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:40:35.331 09:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:40:36.270 09:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:36.270 09:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:36.270 09:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:36.270 09:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:36.270 09:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:36.270 09:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:36.270 09:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:36.270 09:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:36.270 09:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:40:36.270 09:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:40:37.200 09:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:37.200 09:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:37.200 09:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:37.200 09:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:37.200 09:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:37.200 09:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:37.200 09:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:37.200 09:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
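[annotation] The one-per-second iterations above are the test polling the host app's bdev list. A hedged reconstruction of the helper pair traced at @29/@33/@34 (the exact loop shape and any retry cap are assumptions; the pipeline itself is verbatim from the trace):

    get_bdev_list() {
        # normalize the bdev names to one sorted, space-separated line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # spin until the list matches the expected value ('' means "drained")
        while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
    }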
00:40:37.200 09:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:40:37.200 09:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:40:37.458 [2024-05-15 09:06:32.035870] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:40:37.458 [2024-05-15 09:06:32.035953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:37.458 [2024-05-15 09:06:32.035976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:37.458 [2024-05-15 09:06:32.035996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:37.458 [2024-05-15 09:06:32.036018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:37.458 [2024-05-15 09:06:32.036033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:37.458 [2024-05-15 09:06:32.036046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:37.458 [2024-05-15 09:06:32.036060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:37.458 [2024-05-15 09:06:32.036088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:37.458 [2024-05-15 09:06:32.036102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:40:37.458 [2024-05-15 09:06:32.036114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:37.458 [2024-05-15 09:06:32.036127] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x604850 is same with the state(5) to be set 00:40:37.458 [2024-05-15 09:06:32.045889] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x604850 (9): Bad file descriptor 00:40:37.458 [2024-05-15 09:06:32.055938] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:40:38.391 09:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:38.391 09:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:38.391 09:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:38.391 09:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:38.391 09:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:38.391 09:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:38.391 09:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:38.391 [2024-05-15 09:06:33.121271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:40:39.762 [2024-05-15 
09:06:34.145262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:40:39.762 [2024-05-15 09:06:34.145325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x604850 with addr=10.0.0.2, port=4420 00:40:39.762 [2024-05-15 09:06:34.145354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x604850 is same with the state(5) to be set 00:40:39.762 [2024-05-15 09:06:34.145830] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x604850 (9): Bad file descriptor 00:40:39.762 [2024-05-15 09:06:34.145879] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:39.762 [2024-05-15 09:06:34.145922] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:40:39.762 [2024-05-15 09:06:34.145964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:39.762 [2024-05-15 09:06:34.145989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:39.762 [2024-05-15 09:06:34.146011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:39.762 [2024-05-15 09:06:34.146027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:39.762 [2024-05-15 09:06:34.146043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:39.762 [2024-05-15 09:06:34.146057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:39.762 [2024-05-15 09:06:34.146083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:39.762 [2024-05-15 09:06:34.146098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:39.762 [2024-05-15 09:06:34.146114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:40:39.762 [2024-05-15 09:06:34.146129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:39.762 [2024-05-15 09:06:34.146144] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
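[annotation] The errno 110 (ETIMEDOUT) connect failures above are the expected consequence of the removal step traced at @75/@76: with the target's address gone and the link down, each 1 s reconnect attempt times out, and after --ctrlr-loss-timeout-sec=2 the controller is failed, its discovery entry removed, and nvme0n1 deleted, which is what lets the @33 check finally see an empty list:

    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''    # drains once the lost controller is torn down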
00:40:39.762 [2024-05-15 09:06:34.146393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x603ca0 (9): Bad file descriptor 00:40:39.762 [2024-05-15 09:06:34.147414] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:40:39.762 [2024-05-15 09:06:34.147435] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:40:39.762 09:06:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:39.762 09:06:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:40:39.762 09:06:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:40:40.696 09:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:40:41.629 [2024-05-15 09:06:36.159214] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:40:41.629 [2024-05-15 09:06:36.159279] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:40:41.629 [2024-05-15 09:06:36.159309] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:40:41.629 [2024-05-15 09:06:36.247552] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:40:41.629 09:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:41.629 09:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:41.629 09:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:41.629 09:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:41.629 09:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:41.629 09:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:41.629 09:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:41.629 09:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:41.629 09:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:40:41.629 09:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:40:41.887 [2024-05-15 09:06:36.430774] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:40:41.887 [2024-05-15 09:06:36.430823] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:40:41.887 [2024-05-15 09:06:36.430854] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:40:41.887 [2024-05-15 09:06:36.430875] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:40:41.887 [2024-05-15 09:06:36.430890] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:40:41.887 [2024-05-15 09:06:36.437907] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61ee60 was disconnected and freed. delete nvme_qpair. 
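[annotation] This is the recovery half of the test (traced at @82/@83/@86): the target path is restored and the still-running discovery service re-attaches the subsystem as a fresh controller, nvme1, so the bdev comes back under a new name:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1   # new attach => new controller name; nvme0n1 stays gone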
00:40:42.818 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:42.818 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:42.818 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:42.818 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:42.818 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:42.818 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:42.818 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:42.818 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:42.818 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:40:42.818 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:40:42.818 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2413447 00:40:42.818 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 2413447 ']' 00:40:42.818 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 2413447 00:40:42.818 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:40:42.818 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:40:42.818 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2413447 00:40:42.818 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:40:42.818 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:40:42.818 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2413447' 00:40:42.818 killing process with pid 2413447 00:40:42.818 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # kill 2413447 00:40:42.818 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 2413447 00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:43.075 rmmod nvme_tcp 00:40:43.075 rmmod nvme_fabrics 00:40:43.075 rmmod nvme_keyring 00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
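[annotation] The kill sequence above is autotest_common.sh's killprocess helper; a hedged reconstruction matching the traced steps (pid check, Linux comm lookup, sudo guard, kill, reap), with the sudo branch elided since the trace takes the plain path:

    killprocess() {
        [[ -z $1 ]] && return 1
        kill -0 "$1" 2>/dev/null || return 0          # nothing to do
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$1")   # "reactor_0" here
            [[ $process_name == sudo ]] && return 1   # sudo handling elided
        fi
        echo "killing process with pid $1"
        kill "$1"
        wait "$1" 2>/dev/null                         # reap the child
    }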
00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2413420 ']' 00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2413420 00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 2413420 ']' 00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 2413420 00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2413420 00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2413420' 00:40:43.075 killing process with pid 2413420 00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # kill 2413420 00:40:43.075 [2024-05-15 09:06:37.713353] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:40:43.075 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 2413420 00:40:43.333 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:43.333 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:43.333 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:43.333 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:43.333 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:43.333 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:43.333 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:43.333 09:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:45.236 09:06:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:45.236 00:40:45.236 real 0m18.123s 00:40:45.236 user 0m24.852s 00:40:45.236 sys 0m3.212s 00:40:45.236 09:06:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:40:45.236 09:06:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:45.236 ************************************ 00:40:45.236 END TEST nvmf_discovery_remove_ifc 00:40:45.236 ************************************ 00:40:45.236 09:06:40 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:40:45.493 09:06:40 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:40:45.493 09:06:40 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:40:45.493 09:06:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
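[annotation] The teardown traced above (nvmftestfini -> nvmfcleanup -> nvmf_tcp_fini) unloads the kernel initiator modules, retrying because module references can linger briefly, then removes the test namespace and flushes the initiator-side address. A sketch; the netns delete is an assumption about what _remove_spdk_ns does, since the trace only shows the wrapper call:

    set +e
    sync
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break    # rmmod nvme_tcp/nvme_fabrics/nvme_keyring
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1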
00:40:45.493 ************************************ 00:40:45.493 START TEST nvmf_identify_kernel_target 00:40:45.493 ************************************ 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:40:45.493 * Looking for test storage... 00:40:45.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:45.493 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:40:45.493 09:06:40 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:45.494 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:45.494 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:45.494 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:45.494 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:45.494 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:45.494 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:45.494 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:45.494 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:45.494 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:45.494 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:40:45.494 09:06:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:48.020 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:40:48.021 Found 0000:09:00.0 (0x8086 - 0x159b) 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:40:48.021 Found 0000:09:00.1 (0x8086 - 0x159b) 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:40:48.021 Found net devices under 0000:09:00.0: cvl_0_0 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:40:48.021 Found net devices under 0000:09:00.1: cvl_0_1 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:48.021 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:48.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:48.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:40:48.279 00:40:48.279 --- 10.0.0.2 ping statistics --- 00:40:48.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:48.279 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:48.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:48.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:40:48.279 00:40:48.279 --- 10.0.0.1 ping statistics --- 00:40:48.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:48.279 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:40:48.279 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:40:48.280 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:40:48.280 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:40:48.280 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:40:48.280 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:48.280 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:48.280 09:06:42 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:40:48.280 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:40:48.280 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:40:48.280 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:40:48.280 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:40:48.280 09:06:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:49.655 Waiting for block devices as requested 00:40:49.655 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:40:49.655 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:40:49.655 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:40:49.655 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:40:49.655 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:40:49.912 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:40:49.912 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:40:49.912 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:40:49.912 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:40:50.169 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:40:50.169 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:40:50.169 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:40:50.426 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:40:50.426 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:40:50.426 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:40:50.426 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:40:50.684 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:40:50.684 No valid GPT data, bailing 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:40:50.684 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:40:50.684 00:40:50.684 Discovery Log Number of Records 2, Generation counter 2 00:40:50.684 =====Discovery Log Entry 0====== 00:40:50.684 trtype: tcp 00:40:50.684 adrfam: ipv4 00:40:50.684 subtype: current discovery subsystem 00:40:50.684 treq: not specified, sq flow control disable supported 00:40:50.684 portid: 1 00:40:50.684 trsvcid: 4420 00:40:50.684 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:40:50.684 traddr: 10.0.0.1 00:40:50.684 eflags: none 00:40:50.684 sectype: none 00:40:50.684 =====Discovery Log Entry 1====== 00:40:50.684 trtype: tcp 00:40:50.684 adrfam: ipv4 00:40:50.684 subtype: nvme subsystem 00:40:50.684 treq: not specified, sq flow control disable supported 00:40:50.684 portid: 1 00:40:50.684 trsvcid: 4420 00:40:50.684 subnqn: nqn.2016-06.io.spdk:testnqn 00:40:50.684 traddr: 10.0.0.1 00:40:50.684 eflags: none 00:40:50.684 sectype: none 00:40:50.685 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:40:50.685 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:40:50.945 EAL: No free 2048 kB hugepages reported on node 1 00:40:50.945 ===================================================== 00:40:50.945 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:40:50.945 ===================================================== 00:40:50.945 Controller Capabilities/Features 00:40:50.945 ================================ 00:40:50.945 Vendor ID: 0000 00:40:50.945 Subsystem Vendor ID: 0000 00:40:50.945 Serial Number: 3bdcf2fd29bf628d042b 00:40:50.945 Model Number: Linux 00:40:50.945 Firmware Version: 6.7.0-68 00:40:50.945 Recommended Arb Burst: 0 00:40:50.945 IEEE OUI Identifier: 00 00 00 00:40:50.945 Multi-path I/O 00:40:50.945 May have multiple subsystem ports: No 00:40:50.945 May have multiple 
controllers: No 00:40:50.945 Associated with SR-IOV VF: No 00:40:50.945 Max Data Transfer Size: Unlimited 00:40:50.945 Max Number of Namespaces: 0 00:40:50.945 Max Number of I/O Queues: 1024 00:40:50.945 NVMe Specification Version (VS): 1.3 00:40:50.945 NVMe Specification Version (Identify): 1.3 00:40:50.945 Maximum Queue Entries: 1024 00:40:50.945 Contiguous Queues Required: No 00:40:50.945 Arbitration Mechanisms Supported 00:40:50.945 Weighted Round Robin: Not Supported 00:40:50.945 Vendor Specific: Not Supported 00:40:50.945 Reset Timeout: 7500 ms 00:40:50.945 Doorbell Stride: 4 bytes 00:40:50.945 NVM Subsystem Reset: Not Supported 00:40:50.945 Command Sets Supported 00:40:50.945 NVM Command Set: Supported 00:40:50.945 Boot Partition: Not Supported 00:40:50.945 Memory Page Size Minimum: 4096 bytes 00:40:50.945 Memory Page Size Maximum: 4096 bytes 00:40:50.945 Persistent Memory Region: Not Supported 00:40:50.945 Optional Asynchronous Events Supported 00:40:50.945 Namespace Attribute Notices: Not Supported 00:40:50.945 Firmware Activation Notices: Not Supported 00:40:50.945 ANA Change Notices: Not Supported 00:40:50.945 PLE Aggregate Log Change Notices: Not Supported 00:40:50.945 LBA Status Info Alert Notices: Not Supported 00:40:50.945 EGE Aggregate Log Change Notices: Not Supported 00:40:50.945 Normal NVM Subsystem Shutdown event: Not Supported 00:40:50.945 Zone Descriptor Change Notices: Not Supported 00:40:50.945 Discovery Log Change Notices: Supported 00:40:50.945 Controller Attributes 00:40:50.945 128-bit Host Identifier: Not Supported 00:40:50.945 Non-Operational Permissive Mode: Not Supported 00:40:50.945 NVM Sets: Not Supported 00:40:50.945 Read Recovery Levels: Not Supported 00:40:50.945 Endurance Groups: Not Supported 00:40:50.945 Predictable Latency Mode: Not Supported 00:40:50.945 Traffic Based Keep ALive: Not Supported 00:40:50.945 Namespace Granularity: Not Supported 00:40:50.945 SQ Associations: Not Supported 00:40:50.945 UUID List: Not Supported 00:40:50.945 Multi-Domain Subsystem: Not Supported 00:40:50.945 Fixed Capacity Management: Not Supported 00:40:50.945 Variable Capacity Management: Not Supported 00:40:50.945 Delete Endurance Group: Not Supported 00:40:50.945 Delete NVM Set: Not Supported 00:40:50.945 Extended LBA Formats Supported: Not Supported 00:40:50.945 Flexible Data Placement Supported: Not Supported 00:40:50.945 00:40:50.945 Controller Memory Buffer Support 00:40:50.945 ================================ 00:40:50.945 Supported: No 00:40:50.945 00:40:50.945 Persistent Memory Region Support 00:40:50.945 ================================ 00:40:50.945 Supported: No 00:40:50.945 00:40:50.945 Admin Command Set Attributes 00:40:50.945 ============================ 00:40:50.945 Security Send/Receive: Not Supported 00:40:50.945 Format NVM: Not Supported 00:40:50.945 Firmware Activate/Download: Not Supported 00:40:50.945 Namespace Management: Not Supported 00:40:50.945 Device Self-Test: Not Supported 00:40:50.945 Directives: Not Supported 00:40:50.945 NVMe-MI: Not Supported 00:40:50.945 Virtualization Management: Not Supported 00:40:50.945 Doorbell Buffer Config: Not Supported 00:40:50.945 Get LBA Status Capability: Not Supported 00:40:50.945 Command & Feature Lockdown Capability: Not Supported 00:40:50.945 Abort Command Limit: 1 00:40:50.945 Async Event Request Limit: 1 00:40:50.945 Number of Firmware Slots: N/A 00:40:50.945 Firmware Slot 1 Read-Only: N/A 00:40:50.945 Firmware Activation Without Reset: N/A 00:40:50.945 Multiple Update Detection Support: N/A 
00:40:50.945 Firmware Update Granularity: No Information Provided 00:40:50.945 Per-Namespace SMART Log: No 00:40:50.945 Asymmetric Namespace Access Log Page: Not Supported 00:40:50.945 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:40:50.945 Command Effects Log Page: Not Supported 00:40:50.945 Get Log Page Extended Data: Supported 00:40:50.945 Telemetry Log Pages: Not Supported 00:40:50.945 Persistent Event Log Pages: Not Supported 00:40:50.945 Supported Log Pages Log Page: May Support 00:40:50.945 Commands Supported & Effects Log Page: Not Supported 00:40:50.945 Feature Identifiers & Effects Log Page:May Support 00:40:50.945 NVMe-MI Commands & Effects Log Page: May Support 00:40:50.945 Data Area 4 for Telemetry Log: Not Supported 00:40:50.945 Error Log Page Entries Supported: 1 00:40:50.945 Keep Alive: Not Supported 00:40:50.945 00:40:50.945 NVM Command Set Attributes 00:40:50.945 ========================== 00:40:50.945 Submission Queue Entry Size 00:40:50.945 Max: 1 00:40:50.945 Min: 1 00:40:50.945 Completion Queue Entry Size 00:40:50.945 Max: 1 00:40:50.945 Min: 1 00:40:50.945 Number of Namespaces: 0 00:40:50.946 Compare Command: Not Supported 00:40:50.946 Write Uncorrectable Command: Not Supported 00:40:50.946 Dataset Management Command: Not Supported 00:40:50.946 Write Zeroes Command: Not Supported 00:40:50.946 Set Features Save Field: Not Supported 00:40:50.946 Reservations: Not Supported 00:40:50.946 Timestamp: Not Supported 00:40:50.946 Copy: Not Supported 00:40:50.946 Volatile Write Cache: Not Present 00:40:50.946 Atomic Write Unit (Normal): 1 00:40:50.946 Atomic Write Unit (PFail): 1 00:40:50.946 Atomic Compare & Write Unit: 1 00:40:50.946 Fused Compare & Write: Not Supported 00:40:50.946 Scatter-Gather List 00:40:50.946 SGL Command Set: Supported 00:40:50.946 SGL Keyed: Not Supported 00:40:50.946 SGL Bit Bucket Descriptor: Not Supported 00:40:50.946 SGL Metadata Pointer: Not Supported 00:40:50.946 Oversized SGL: Not Supported 00:40:50.946 SGL Metadata Address: Not Supported 00:40:50.946 SGL Offset: Supported 00:40:50.946 Transport SGL Data Block: Not Supported 00:40:50.946 Replay Protected Memory Block: Not Supported 00:40:50.946 00:40:50.946 Firmware Slot Information 00:40:50.946 ========================= 00:40:50.946 Active slot: 0 00:40:50.946 00:40:50.946 00:40:50.946 Error Log 00:40:50.946 ========= 00:40:50.946 00:40:50.946 Active Namespaces 00:40:50.946 ================= 00:40:50.946 Discovery Log Page 00:40:50.946 ================== 00:40:50.946 Generation Counter: 2 00:40:50.946 Number of Records: 2 00:40:50.946 Record Format: 0 00:40:50.946 00:40:50.946 Discovery Log Entry 0 00:40:50.946 ---------------------- 00:40:50.946 Transport Type: 3 (TCP) 00:40:50.946 Address Family: 1 (IPv4) 00:40:50.946 Subsystem Type: 3 (Current Discovery Subsystem) 00:40:50.946 Entry Flags: 00:40:50.946 Duplicate Returned Information: 0 00:40:50.946 Explicit Persistent Connection Support for Discovery: 0 00:40:50.946 Transport Requirements: 00:40:50.946 Secure Channel: Not Specified 00:40:50.946 Port ID: 1 (0x0001) 00:40:50.946 Controller ID: 65535 (0xffff) 00:40:50.946 Admin Max SQ Size: 32 00:40:50.946 Transport Service Identifier: 4420 00:40:50.946 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:40:50.946 Transport Address: 10.0.0.1 00:40:50.946 Discovery Log Entry 1 00:40:50.946 ---------------------- 00:40:50.946 Transport Type: 3 (TCP) 00:40:50.946 Address Family: 1 (IPv4) 00:40:50.946 Subsystem Type: 2 (NVM Subsystem) 00:40:50.946 Entry Flags: 
00:40:50.946 Duplicate Returned Information: 0 00:40:50.946 Explicit Persistent Connection Support for Discovery: 0 00:40:50.946 Transport Requirements: 00:40:50.946 Secure Channel: Not Specified 00:40:50.946 Port ID: 1 (0x0001) 00:40:50.946 Controller ID: 65535 (0xffff) 00:40:50.946 Admin Max SQ Size: 32 00:40:50.946 Transport Service Identifier: 4420 00:40:50.946 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:40:50.946 Transport Address: 10.0.0.1 00:40:50.946 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:50.946 EAL: No free 2048 kB hugepages reported on node 1 00:40:50.946 get_feature(0x01) failed 00:40:50.946 get_feature(0x02) failed 00:40:50.946 get_feature(0x04) failed 00:40:50.946 ===================================================== 00:40:50.946 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:50.946 ===================================================== 00:40:50.946 Controller Capabilities/Features 00:40:50.946 ================================ 00:40:50.946 Vendor ID: 0000 00:40:50.946 Subsystem Vendor ID: 0000 00:40:50.946 Serial Number: 7207cceed5c85beca6d8 00:40:50.946 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:40:50.946 Firmware Version: 6.7.0-68 00:40:50.946 Recommended Arb Burst: 6 00:40:50.946 IEEE OUI Identifier: 00 00 00 00:40:50.946 Multi-path I/O 00:40:50.946 May have multiple subsystem ports: Yes 00:40:50.946 May have multiple controllers: Yes 00:40:50.946 Associated with SR-IOV VF: No 00:40:50.946 Max Data Transfer Size: Unlimited 00:40:50.946 Max Number of Namespaces: 1024 00:40:50.946 Max Number of I/O Queues: 128 00:40:50.946 NVMe Specification Version (VS): 1.3 00:40:50.946 NVMe Specification Version (Identify): 1.3 00:40:50.946 Maximum Queue Entries: 1024 00:40:50.946 Contiguous Queues Required: No 00:40:50.946 Arbitration Mechanisms Supported 00:40:50.946 Weighted Round Robin: Not Supported 00:40:50.946 Vendor Specific: Not Supported 00:40:50.946 Reset Timeout: 7500 ms 00:40:50.946 Doorbell Stride: 4 bytes 00:40:50.946 NVM Subsystem Reset: Not Supported 00:40:50.946 Command Sets Supported 00:40:50.946 NVM Command Set: Supported 00:40:50.946 Boot Partition: Not Supported 00:40:50.946 Memory Page Size Minimum: 4096 bytes 00:40:50.946 Memory Page Size Maximum: 4096 bytes 00:40:50.946 Persistent Memory Region: Not Supported 00:40:50.946 Optional Asynchronous Events Supported 00:40:50.946 Namespace Attribute Notices: Supported 00:40:50.946 Firmware Activation Notices: Not Supported 00:40:50.946 ANA Change Notices: Supported 00:40:50.946 PLE Aggregate Log Change Notices: Not Supported 00:40:50.946 LBA Status Info Alert Notices: Not Supported 00:40:50.946 EGE Aggregate Log Change Notices: Not Supported 00:40:50.946 Normal NVM Subsystem Shutdown event: Not Supported 00:40:50.946 Zone Descriptor Change Notices: Not Supported 00:40:50.946 Discovery Log Change Notices: Not Supported 00:40:50.946 Controller Attributes 00:40:50.946 128-bit Host Identifier: Supported 00:40:50.946 Non-Operational Permissive Mode: Not Supported 00:40:50.946 NVM Sets: Not Supported 00:40:50.946 Read Recovery Levels: Not Supported 00:40:50.946 Endurance Groups: Not Supported 00:40:50.946 Predictable Latency Mode: Not Supported 00:40:50.946 Traffic Based Keep ALive: Supported 00:40:50.946 Namespace Granularity: Not Supported 
00:40:50.946 SQ Associations: Not Supported 00:40:50.946 UUID List: Not Supported 00:40:50.946 Multi-Domain Subsystem: Not Supported 00:40:50.946 Fixed Capacity Management: Not Supported 00:40:50.946 Variable Capacity Management: Not Supported 00:40:50.946 Delete Endurance Group: Not Supported 00:40:50.946 Delete NVM Set: Not Supported 00:40:50.946 Extended LBA Formats Supported: Not Supported 00:40:50.946 Flexible Data Placement Supported: Not Supported 00:40:50.946 00:40:50.946 Controller Memory Buffer Support 00:40:50.946 ================================ 00:40:50.946 Supported: No 00:40:50.946 00:40:50.946 Persistent Memory Region Support 00:40:50.946 ================================ 00:40:50.946 Supported: No 00:40:50.946 00:40:50.946 Admin Command Set Attributes 00:40:50.946 ============================ 00:40:50.946 Security Send/Receive: Not Supported 00:40:50.946 Format NVM: Not Supported 00:40:50.946 Firmware Activate/Download: Not Supported 00:40:50.946 Namespace Management: Not Supported 00:40:50.946 Device Self-Test: Not Supported 00:40:50.946 Directives: Not Supported 00:40:50.946 NVMe-MI: Not Supported 00:40:50.946 Virtualization Management: Not Supported 00:40:50.946 Doorbell Buffer Config: Not Supported 00:40:50.946 Get LBA Status Capability: Not Supported 00:40:50.946 Command & Feature Lockdown Capability: Not Supported 00:40:50.946 Abort Command Limit: 4 00:40:50.946 Async Event Request Limit: 4 00:40:50.946 Number of Firmware Slots: N/A 00:40:50.946 Firmware Slot 1 Read-Only: N/A 00:40:50.946 Firmware Activation Without Reset: N/A 00:40:50.946 Multiple Update Detection Support: N/A 00:40:50.946 Firmware Update Granularity: No Information Provided 00:40:50.946 Per-Namespace SMART Log: Yes 00:40:50.946 Asymmetric Namespace Access Log Page: Supported 00:40:50.946 ANA Transition Time : 10 sec 00:40:50.946 00:40:50.946 Asymmetric Namespace Access Capabilities 00:40:50.946 ANA Optimized State : Supported 00:40:50.946 ANA Non-Optimized State : Supported 00:40:50.946 ANA Inaccessible State : Supported 00:40:50.946 ANA Persistent Loss State : Supported 00:40:50.946 ANA Change State : Supported 00:40:50.946 ANAGRPID is not changed : No 00:40:50.946 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:40:50.946 00:40:50.946 ANA Group Identifier Maximum : 128 00:40:50.946 Number of ANA Group Identifiers : 128 00:40:50.946 Max Number of Allowed Namespaces : 1024 00:40:50.946 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:40:50.946 Command Effects Log Page: Supported 00:40:50.946 Get Log Page Extended Data: Supported 00:40:50.946 Telemetry Log Pages: Not Supported 00:40:50.946 Persistent Event Log Pages: Not Supported 00:40:50.946 Supported Log Pages Log Page: May Support 00:40:50.946 Commands Supported & Effects Log Page: Not Supported 00:40:50.946 Feature Identifiers & Effects Log Page:May Support 00:40:50.946 NVMe-MI Commands & Effects Log Page: May Support 00:40:50.946 Data Area 4 for Telemetry Log: Not Supported 00:40:50.946 Error Log Page Entries Supported: 128 00:40:50.946 Keep Alive: Supported 00:40:50.947 Keep Alive Granularity: 1000 ms 00:40:50.947 00:40:50.947 NVM Command Set Attributes 00:40:50.947 ========================== 00:40:50.947 Submission Queue Entry Size 00:40:50.947 Max: 64 00:40:50.947 Min: 64 00:40:50.947 Completion Queue Entry Size 00:40:50.947 Max: 16 00:40:50.947 Min: 16 00:40:50.947 Number of Namespaces: 1024 00:40:50.947 Compare Command: Not Supported 00:40:50.947 Write Uncorrectable Command: Not Supported 00:40:50.947 Dataset Management Command: Supported 
00:40:50.947 Write Zeroes Command: Supported 00:40:50.947 Set Features Save Field: Not Supported 00:40:50.947 Reservations: Not Supported 00:40:50.947 Timestamp: Not Supported 00:40:50.947 Copy: Not Supported 00:40:50.947 Volatile Write Cache: Present 00:40:50.947 Atomic Write Unit (Normal): 1 00:40:50.947 Atomic Write Unit (PFail): 1 00:40:50.947 Atomic Compare & Write Unit: 1 00:40:50.947 Fused Compare & Write: Not Supported 00:40:50.947 Scatter-Gather List 00:40:50.947 SGL Command Set: Supported 00:40:50.947 SGL Keyed: Not Supported 00:40:50.947 SGL Bit Bucket Descriptor: Not Supported 00:40:50.947 SGL Metadata Pointer: Not Supported 00:40:50.947 Oversized SGL: Not Supported 00:40:50.947 SGL Metadata Address: Not Supported 00:40:50.947 SGL Offset: Supported 00:40:50.947 Transport SGL Data Block: Not Supported 00:40:50.947 Replay Protected Memory Block: Not Supported 00:40:50.947 00:40:50.947 Firmware Slot Information 00:40:50.947 ========================= 00:40:50.947 Active slot: 0 00:40:50.947 00:40:50.947 Asymmetric Namespace Access 00:40:50.947 =========================== 00:40:50.947 Change Count : 0 00:40:50.947 Number of ANA Group Descriptors : 1 00:40:50.947 ANA Group Descriptor : 0 00:40:50.947 ANA Group ID : 1 00:40:50.947 Number of NSID Values : 1 00:40:50.947 Change Count : 0 00:40:50.947 ANA State : 1 00:40:50.947 Namespace Identifier : 1 00:40:50.947 00:40:50.947 Commands Supported and Effects 00:40:50.947 ============================== 00:40:50.947 Admin Commands 00:40:50.947 -------------- 00:40:50.947 Get Log Page (02h): Supported 00:40:50.947 Identify (06h): Supported 00:40:50.947 Abort (08h): Supported 00:40:50.947 Set Features (09h): Supported 00:40:50.947 Get Features (0Ah): Supported 00:40:50.947 Asynchronous Event Request (0Ch): Supported 00:40:50.947 Keep Alive (18h): Supported 00:40:50.947 I/O Commands 00:40:50.947 ------------ 00:40:50.947 Flush (00h): Supported 00:40:50.947 Write (01h): Supported LBA-Change 00:40:50.947 Read (02h): Supported 00:40:50.947 Write Zeroes (08h): Supported LBA-Change 00:40:50.947 Dataset Management (09h): Supported 00:40:50.947 00:40:50.947 Error Log 00:40:50.947 ========= 00:40:50.947 Entry: 0 00:40:50.947 Error Count: 0x3 00:40:50.947 Submission Queue Id: 0x0 00:40:50.947 Command Id: 0x5 00:40:50.947 Phase Bit: 0 00:40:50.947 Status Code: 0x2 00:40:50.947 Status Code Type: 0x0 00:40:50.947 Do Not Retry: 1 00:40:50.947 Error Location: 0x28 00:40:50.947 LBA: 0x0 00:40:50.947 Namespace: 0x0 00:40:50.947 Vendor Log Page: 0x0 00:40:50.947 ----------- 00:40:50.947 Entry: 1 00:40:50.947 Error Count: 0x2 00:40:50.947 Submission Queue Id: 0x0 00:40:50.947 Command Id: 0x5 00:40:50.947 Phase Bit: 0 00:40:50.947 Status Code: 0x2 00:40:50.947 Status Code Type: 0x0 00:40:50.947 Do Not Retry: 1 00:40:50.947 Error Location: 0x28 00:40:50.947 LBA: 0x0 00:40:50.947 Namespace: 0x0 00:40:50.947 Vendor Log Page: 0x0 00:40:50.947 ----------- 00:40:50.947 Entry: 2 00:40:50.947 Error Count: 0x1 00:40:50.947 Submission Queue Id: 0x0 00:40:50.947 Command Id: 0x4 00:40:50.947 Phase Bit: 0 00:40:50.947 Status Code: 0x2 00:40:50.947 Status Code Type: 0x0 00:40:50.947 Do Not Retry: 1 00:40:50.947 Error Location: 0x28 00:40:50.947 LBA: 0x0 00:40:50.947 Namespace: 0x0 00:40:50.947 Vendor Log Page: 0x0 00:40:50.947 00:40:50.947 Number of Queues 00:40:50.947 ================ 00:40:50.947 Number of I/O Submission Queues: 128 00:40:50.947 Number of I/O Completion Queues: 128 00:40:50.947 00:40:50.947 ZNS Specific Controller Data 00:40:50.947 
============================ 00:40:50.947 Zone Append Size Limit: 0 00:40:50.947 00:40:50.947 00:40:50.947 Active Namespaces 00:40:50.947 ================= 00:40:50.947 get_feature(0x05) failed 00:40:50.947 Namespace ID:1 00:40:50.947 Command Set Identifier: NVM (00h) 00:40:50.947 Deallocate: Supported 00:40:50.947 Deallocated/Unwritten Error: Not Supported 00:40:50.947 Deallocated Read Value: Unknown 00:40:50.947 Deallocate in Write Zeroes: Not Supported 00:40:50.947 Deallocated Guard Field: 0xFFFF 00:40:50.947 Flush: Supported 00:40:50.947 Reservation: Not Supported 00:40:50.947 Namespace Sharing Capabilities: Multiple Controllers 00:40:50.947 Size (in LBAs): 1953525168 (931GiB) 00:40:50.947 Capacity (in LBAs): 1953525168 (931GiB) 00:40:50.947 Utilization (in LBAs): 1953525168 (931GiB) 00:40:50.947 UUID: 3c7fb0e4-465f-48d7-a3e0-74b8675ea6e5 00:40:50.947 Thin Provisioning: Not Supported 00:40:50.947 Per-NS Atomic Units: Yes 00:40:50.947 Atomic Boundary Size (Normal): 0 00:40:50.947 Atomic Boundary Size (PFail): 0 00:40:50.947 Atomic Boundary Offset: 0 00:40:50.947 NGUID/EUI64 Never Reused: No 00:40:50.947 ANA group ID: 1 00:40:50.947 Namespace Write Protected: No 00:40:50.947 Number of LBA Formats: 1 00:40:50.947 Current LBA Format: LBA Format #00 00:40:50.947 LBA Format #00: Data Size: 512 Metadata Size: 0 00:40:50.947 00:40:50.947 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:40:50.947 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:50.947 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:40:50.947 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:50.947 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:40:50.947 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:50.947 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:50.947 rmmod nvme_tcp 00:40:50.947 rmmod nvme_fabrics 00:40:50.947 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:50.947 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:40:50.947 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:40:50.947 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:40:50.947 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:50.947 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:50.947 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:50.947 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:50.947 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:50.947 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:50.947 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:50.947 09:06:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:52.888 09:06:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:52.888 
09:06:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:40:52.888 09:06:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:40:52.888 09:06:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:40:52.888 09:06:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:52.888 09:06:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:52.888 09:06:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:40:52.888 09:06:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:52.888 09:06:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:40:52.888 09:06:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:40:52.888 09:06:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:54.261 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:40:54.261 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:40:54.261 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:40:54.261 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:40:54.261 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:40:54.261 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:40:54.261 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:40:54.261 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:40:54.261 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:40:54.261 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:40:54.261 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:40:54.261 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:40:54.261 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:40:54.261 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:40:54.261 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:40:54.261 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:40:55.192 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:40:55.450 00:40:55.450 real 0m10.025s 00:40:55.450 user 0m2.296s 00:40:55.450 sys 0m3.882s 00:40:55.450 09:06:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:40:55.450 09:06:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:40:55.450 ************************************ 00:40:55.450 END TEST nvmf_identify_kernel_target 00:40:55.450 ************************************ 00:40:55.450 09:06:50 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:40:55.450 09:06:50 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:40:55.450 09:06:50 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:40:55.450 09:06:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:55.450 ************************************ 00:40:55.450 START TEST nvmf_auth_host 00:40:55.450 ************************************ 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 
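The identify_kernel_target test above drives the Linux kernel nvmet target entirely through configfs: after setup.sh reset returns the NVMe drive to the kernel driver, it creates a subsystem, exposes /dev/nvme0n1 as namespace 1, binds a TCP listener on 10.0.0.1:4420, identifies it with spdk_nvme_identify, and finally tears everything down in clean_kernel_target. Condensed into a standalone sketch, the sequence traced at nvmf/common.sh@642-677 and @686-695 reduces to the lines below. Note the xtrace shows only the echo values, never the redirection targets, so the attribute file names here are the standard nvmet configfs names (attr_model is consistent with the "Model Number: SPDK-nqn.2016-06.io.spdk:testnqn" reported in the identify output above) rather than something read from the log:

# Sketch of configure_kernel_target/clean_kernel_target as traced above;
# redirection targets are assumed, everything else mirrors the xtrace.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=$nvmet/ports/1

modprobe nvmet                                           # common.sh@642
mkdir "$subsys" "$subsys/namespaces/1" "$port"           # @658-660
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"      # @665
echo 1 > "$subsys/attr_allow_any_host"                            # @667
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"            # @668
echo 1 > "$subsys/namespaces/1/enable"                            # @669
echo 10.0.0.1 > "$port/addr_traddr"                               # @671
echo tcp > "$port/addr_trtype"                                    # @672
echo 4420 > "$port/addr_trsvcid"                                  # @673
echo ipv4 > "$port/addr_adrfam"                                   # @674
ln -s "$subsys" "$port/subsystems/"                               # @677

# Teardown (clean_kernel_target, @686-695): disable the namespace, unlink
# the port, remove the configfs directories in reverse order, unload.
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet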
00:40:55.450 * Looking for test storage... 00:40:55.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:55.450 09:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:55.451 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:55.451 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:55.451 09:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:40:55.451 09:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:57.977 
09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:40:57.977 Found 0000:09:00.0 (0x8086 - 0x159b) 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:57.977 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:40:57.978 Found 0000:09:00.1 (0x8086 - 0x159b) 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:40:57.978 Found net devices under 0000:09:00.0: 
cvl_0_0 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:40:57.978 Found net devices under 0000:09:00.1: cvl_0_1 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:57.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:57.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:40:57.978 00:40:57.978 --- 10.0.0.2 ping statistics --- 00:40:57.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:57.978 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:57.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:57.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:40:57.978 00:40:57.978 --- 10.0.0.1 ping statistics --- 00:40:57.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:57.978 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2421410 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2421410 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 2421410 ']' 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
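As in the previous test, nvmf_tcp_init splits the dual-port E810 NIC between two network stacks: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side), so initiator and target traffic crosses a real link even on a single host. The nvmf_tgt application (pid 2421410) is then launched inside the namespace with -L nvme_auth so the DH-HMAC-CHAP state machine is logged. The commands traced at nvmf/common.sh@244-268 condense to:

# The nvmf_tcp_init sequence traced above, one command per traced line.
ip -4 addr flush cvl_0_0                           # @244
ip -4 addr flush cvl_0_1                           # @245
ip netns add cvl_0_0_ns_spdk                       # @248
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # @251: target port leaves root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                # @254: initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # @255
ip link set cvl_0_1 up                             # @258
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                # @260
ip netns exec cvl_0_0_ns_spdk ip link set lo up    # @261
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # @264
ping -c 1 10.0.0.2                                 # @267: root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # @268: target ns -> initiator

With this layout in place, NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk) is prepended to NVMF_APP (@243, @270), which is why every target-side invocation in the log, including the nvmf_tgt start at @480, carries the netns wrapper.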
00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:40:57.978 09:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=71de72760ac944552acf75eb3419542c 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.GaF 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 71de72760ac944552acf75eb3419542c 0 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 71de72760ac944552acf75eb3419542c 0 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=71de72760ac944552acf75eb3419542c 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.GaF 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.GaF 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.GaF 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:40:58.545 
09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ab65cb7c1de5b1ebbf31da071dd643a4f97bec241040c035a67903ea3dea2d80 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ItK 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ab65cb7c1de5b1ebbf31da071dd643a4f97bec241040c035a67903ea3dea2d80 3 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ab65cb7c1de5b1ebbf31da071dd643a4f97bec241040c035a67903ea3dea2d80 3 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ab65cb7c1de5b1ebbf31da071dd643a4f97bec241040c035a67903ea3dea2d80 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ItK 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ItK 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.ItK 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eace98bd0e198b577c7ce439facc7823e7a8812a5906c622 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.BMS 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eace98bd0e198b577c7ce439facc7823e7a8812a5906c622 0 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eace98bd0e198b577c7ce439facc7823e7a8812a5906c622 0 00:40:58.545 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eace98bd0e198b577c7ce439facc7823e7a8812a5906c622 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.BMS 00:40:58.546 09:06:53 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.BMS 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.BMS 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=68ce2eb9c9140e84d554f15d02b3c4da9a1f3299e0bcd51f 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.jH9 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 68ce2eb9c9140e84d554f15d02b3c4da9a1f3299e0bcd51f 2 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 68ce2eb9c9140e84d554f15d02b3c4da9a1f3299e0bcd51f 2 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=68ce2eb9c9140e84d554f15d02b3c4da9a1f3299e0bcd51f 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.jH9 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.jH9 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.jH9 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=417b253fd5bc4224b01f45d04d38cf9d 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.kvk 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 417b253fd5bc4224b01f45d04d38cf9d 1 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 417b253fd5bc4224b01f45d04d38cf9d 1 
00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=417b253fd5bc4224b01f45d04d38cf9d 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:40:58.546 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.kvk 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.kvk 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.kvk 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d3e3fd7b2427f37a7f9246cdc8a46e0d 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.YDg 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d3e3fd7b2427f37a7f9246cdc8a46e0d 1 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d3e3fd7b2427f37a7f9246cdc8a46e0d 1 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d3e3fd7b2427f37a7f9246cdc8a46e0d 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.YDg 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.YDg 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.YDg 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=9b5c63023c4ecf86d7bf0e4def70d76086609a5494483e1c 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.W56 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9b5c63023c4ecf86d7bf0e4def70d76086609a5494483e1c 2 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9b5c63023c4ecf86d7bf0e4def70d76086609a5494483e1c 2 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9b5c63023c4ecf86d7bf0e4def70d76086609a5494483e1c 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.W56 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.W56 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.W56 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7f67fff7937caa924dfb12ab59925226 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.TYs 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7f67fff7937caa924dfb12ab59925226 0 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7f67fff7937caa924dfb12ab59925226 0 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:40:58.804 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7f67fff7937caa924dfb12ab59925226 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.TYs 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.TYs 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.TYs 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=57416e35c48bb51ba0ba16269476e0008183cef7022d2ff9d1676cd6fe4bd2da 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.CQ0 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 57416e35c48bb51ba0ba16269476e0008183cef7022d2ff9d1676cd6fe4bd2da 3 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 57416e35c48bb51ba0ba16269476e0008183cef7022d2ff9d1676cd6fe4bd2da 3 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=57416e35c48bb51ba0ba16269476e0008183cef7022d2ff9d1676cd6fe4bd2da 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.CQ0 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.CQ0 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.CQ0 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2421410 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 2421410 ']' 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:58.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
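[Editor's note: each gen_dhchap_key call above follows the same recipe: pull len/2 random bytes as a lowercase hex string from /dev/urandom with xxd, then wrap that ASCII string into the NVMe DH-CHAP secret format "DHHC-1:<digest>:<base64>:", where <digest> is a two-hex-digit hash hint (00 = unhashed, 01 = sha256, 02 = sha384, 03 = sha512) and the base64 payload is the key characters followed by their little-endian CRC32. The sketch below is a reconstruction of what the traced `python -` step computes, not SPDK's verbatim code:]

gen_dhchap_key() {   # usage: gen_dhchap_key <null|sha256|sha384|sha512> <hex-length>
  local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
  local key
  key=$(xxd -p -c0 -l $(($2 / 2)) /dev/urandom)    # $2 hex characters of randomness
  python3 - "$key" "${digests[$1]}" <<'EOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
# secret = base64(ASCII key || little-endian CRC32(key)), per the DHHC-1 format
b64 = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), b64))
EOF
}

[The formatted strings are what surface later as the DHHC-1:00:/01:/02:/03: values, and the 0600 key files written here (keys[0..4], ckeys[0..3]) are the ones the next records register with the target via rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GaF and so on.]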
00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:40:58.805 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GaF 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ItK ]] 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ItK 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.BMS 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.jH9 ]] 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jH9 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.kvk 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.YDg ]] 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.YDg 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.W56 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.TYs ]] 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.TYs 00:40:59.063 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.CQ0 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
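[Editor's note: configure_kernel_target, invoked just above with nqn.2024-02.io.spdk:cnode0 and 10.0.0.1, together with the nvmet_auth_init/nvmet_auth_set_key helpers, drives the in-kernel nvmet target entirely through configfs. The long mkdir/echo/ln -s run that follows condenses to roughly the sketch below; the backing block device path and which attribute each bare `echo` in the trace writes are inferred, not verbatim from the script:]

nvmet=/sys/kernel/config/nvmet
subnqn=nqn.2024-02.io.spdk:cnode0
hostnqn=nqn.2024-02.io.spdk:host0

modprobe nvmet
mkdir "$nvmet/subsystems/$subnqn"
mkdir "$nvmet/subsystems/$subnqn/namespaces/1"
mkdir "$nvmet/ports/1"

# Back namespace 1 with a local NVMe drive (path hypothetical) and enable it.
echo /dev/nvme0n1 > "$nvmet/subsystems/$subnqn/namespaces/1/device_path"
echo 1            > "$nvmet/subsystems/$subnqn/namespaces/1/enable"

# Expose the subsystem over NVMe/TCP at 10.0.0.1:4420.
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
ln -s "$nvmet/subsystems/$subnqn" "$nvmet/ports/1/subsystems/"

# Restrict access to one host and hand it DH-CHAP credentials; nvmet_auth_set_key
# rewrites these four attributes for every digest/dhgroup/keyid combination.
mkdir "$nvmet/hosts/$hostnqn"
echo 0 > "$nvmet/subsystems/$subnqn/attr_allow_any_host"
ln -s "$nvmet/hosts/$hostnqn" "$nvmet/subsystems/$subnqn/allowed_hosts/"
echo 'hmac(sha256)'  > "$nvmet/hosts/$hostnqn/dhchap_hash"
echo ffdhe2048       > "$nvmet/hosts/$hostnqn/dhchap_dhgroup"
echo 'DHHC-1:00:...' > "$nvmet/hosts/$hostnqn/dhchap_key"       # host secret (full string as generated earlier)
echo 'DHHC-1:02:...' > "$nvmet/hosts/$hostnqn/dhchap_ctrl_key"  # controller secret, when the keyid has one

nvme discover -t tcp -a 10.0.0.1 -s 4420   # should list the discovery subsystem and $subnqn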
00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:40:59.321 09:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:00.693 Waiting for block devices as requested 00:41:00.693 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:00.693 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:00.693 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:00.693 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:00.950 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:00.950 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:00.950 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:00.950 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:01.207 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:41:01.207 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:01.207 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:01.207 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:01.207 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:01.464 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:01.464 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:01.464 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:01.464 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:41:02.029 No valid GPT data, bailing 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:41:02.029 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:41:02.029 00:41:02.029 Discovery Log Number of Records 2, Generation counter 2 00:41:02.029 =====Discovery Log Entry 0====== 00:41:02.029 trtype: tcp 00:41:02.029 adrfam: ipv4 00:41:02.029 subtype: current discovery subsystem 00:41:02.029 treq: not specified, sq flow control disable supported 00:41:02.029 portid: 1 00:41:02.029 trsvcid: 4420 00:41:02.029 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:41:02.029 traddr: 10.0.0.1 00:41:02.029 eflags: none 00:41:02.029 sectype: none 00:41:02.029 =====Discovery Log Entry 1====== 00:41:02.029 trtype: tcp 00:41:02.029 adrfam: ipv4 00:41:02.029 subtype: nvme subsystem 00:41:02.030 treq: not specified, sq flow control disable supported 00:41:02.030 portid: 1 00:41:02.030 trsvcid: 4420 00:41:02.030 subnqn: nqn.2024-02.io.spdk:cnode0 00:41:02.030 traddr: 10.0.0.1 00:41:02.030 eflags: none 00:41:02.030 sectype: none 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 
]] 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.030 09:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.287 nvme0n1 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.287 
09:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: ]] 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.287 
09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.287 09:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.287 nvme0n1 00:41:02.287 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.287 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:02.287 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.287 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.287 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:02.287 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:02.545 09:06:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: ]] 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.545 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.545 nvme0n1 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
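[Editor's note: every connect_authenticate iteration in this stretch of the log is the same four-step RPC cycle run by the SPDK host inside the namespace against the kernel target at 10.0.0.1:4420: pin the host to one digest/dhgroup pair, attach with the keyring entries for the current keyid, check that controller nvme0 (and its nvme0n1 namespace) materialized, then detach so the next combination starts clean. A condensed sketch of one pass, using the flag set shown in the trace:]

# One pass of the matrix, e.g. digest=sha256 dhgroup=ffdhe2048 keyid=1.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # talks to /var/tmp/spdk.sock

$rpc bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1   # keyring names registered earlier; keyid 4 has no ckey

# Success is judged by the controller showing up; it is then removed so the
# next digest/dhgroup/keyid combination is authenticated from scratch.
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
$rpc bdev_nvme_detach_controller nvme0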
00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: ]] 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.546 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.804 nvme0n1 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: ]] 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:02.804 09:06:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.804 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.062 nvme0n1 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.062 nvme0n1 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:03.062 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: ]] 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:03.320 09:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.320 nvme0n1 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: ]] 00:41:03.320 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:03.321 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.578 nvme0n1 00:41:03.578 
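The entries above repeat a fixed four-step sequence for each digest/dhgroup/key combination: restrict the allowed DH-HMAC-CHAP parameters, attach the controller with the key pair under test, confirm the controller actually appeared, then detach it. A minimal sketch of one such pass, assuming SPDK's scripts/rpc.py is available, the target from earlier in this run is still listening on 10.0.0.1:4420, and keyring entries named key0/ckey0 were registered beforehand (rpc_cmd in the trace is the test harness's wrapper around these same RPCs):

  digest=sha256; dhgroup=ffdhe2048; keyid=0
  # limit the initiator to the digest/dhgroup pair being exercised
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # attach with the host key and (optionally) the controller key for bidirectional auth
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # the controller is only listed if DH-HMAC-CHAP authentication succeeded
  [[ "$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0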
09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: ]] 00:41:03.578 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:03.579 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.836 nvme0n1 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
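Before each connect, the nvmet_auth_set_key helper (host/auth.sh@42-51) pushes the matching parameters to the target side; the bare echo lines in the trace ('hmac(sha256)', the dhgroup name, and the DHHC-1 strings) are its writes, with the redirections omitted by xtrace. A sketch of what those writes plausibly look like, assuming a Linux kernel nvmet target with the default configfs mount; the attribute paths below are not visible in this trace, so treat them as an assumption:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)'  > "$host/dhchap_hash"      # digest under test
  echo ffdhe3072       > "$host/dhchap_dhgroup"   # DH group under test
  echo 'DHHC-1:01:...' > "$host/dhchap_key"       # host secret (full value as printed in the trace)
  echo 'DHHC-1:01:...' > "$host/dhchap_ctrl_key"  # controller secret; skipped when ckey is empty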
00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: ]] 00:41:03.836 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:03.837 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.094 nvme0n1 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:04.094 
09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:04.094 09:06:58 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:04.094 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.352 nvme0n1 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: ]] 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:41:04.352 09:06:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:04.352 09:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.610 nvme0n1 00:41:04.610 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: ]] 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:04.611 09:06:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:04.611 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.868 nvme0n1 00:41:04.868 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:04.868 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:04.868 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:04.868 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: ]] 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:04.869 09:06:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:04.869 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.126 nvme0n1 00:41:05.126 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:05.126 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:05.126 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:05.126 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:05.126 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.126 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:05.126 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:05.126 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:05.126 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:05.126 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
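The host/auth.sh@101 and @102 markers scattered through this stretch give away the overall shape of the test: an outer walk over DH groups with an inner walk over every key index, re-keying the target and re-authenticating on each pass. (The DHHC-1:NN: prefix on each secret encodes how the base64 payload was transformed, 00 for a cleartext secret and 01/02/03 for SHA-256/384/512, per the NVMe-oF secret format.) Reconstructed from those trace lines; the array contents are inferred from the iterations visible in this excerpt, so the real script may carry more entries:

  digest=sha256                                    # the only digest seen in this excerpt
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)
  # keys/ckeys: the DHHC-1 strings printed throughout the trace, indexed 0-4
  for dhgroup in "${dhgroups[@]}"; do              # host/auth.sh@101
      for keyid in "${!keys[@]}"; do               # host/auth.sh@102
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # host/auth.sh@103
          connect_authenticate "$digest" "$dhgroup" "$keyid"  # host/auth.sh@104
      done
  done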
00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: ]] 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:05.384 09:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.642 nvme0n1 00:41:05.642 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:05.642 09:07:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:05.642 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:05.642 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:05.642 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.642 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:05.642 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:05.642 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:05.642 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:05.642 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.642 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:05.642 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:05.642 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:41:05.642 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:05.642 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:05.642 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:05.642 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:05.642 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:05.642 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:05.642 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:05.643 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.902 nvme0n1 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:05.902 09:07:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: ]] 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:05.902 09:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:06.518 nvme0n1 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:06.518 
09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: ]] 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:06.518 09:07:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:06.518 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:07.084 nvme0n1 00:41:07.084 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:07.084 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:07.084 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:07.084 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:07.084 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:07.084 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:07.084 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:07.084 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:07.084 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:07.084 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:07.084 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:07.084 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:07.084 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:41:07.084 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:07.084 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:07.084 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:07.084 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:07.084 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:07.084 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:07.084 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:07.084 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: ]] 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:07.085 09:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:07.651 nvme0n1 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:07.651 
09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: ]] 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:07.651 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:08.216 nvme0n1 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:08.216 09:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:08.780 nvme0n1 00:41:08.780 09:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:08.780 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:08.780 09:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:08.780 09:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:08.780 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:08.780 09:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:08.780 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:08.780 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:08.780 09:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:08.780 09:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:08.780 09:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:08.780 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:08.780 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:08.780 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:41:08.780 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:08.780 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:08.780 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: ]] 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:08.781 09:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:09.712 nvme0n1 00:41:09.712 09:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:09.712 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:09.712 09:07:04 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:09.712 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:09.712 09:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:09.712 09:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:09.712 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:09.712 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:09.712 09:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:09.712 09:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:09.712 09:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:09.712 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:09.712 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:41:09.712 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:09.712 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: ]] 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:09.713 09:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:10.646 nvme0n1 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: ]] 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:10.646 09:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:11.579 nvme0n1 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:11.579 
09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: ]] 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
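The nvmet_auth_set_key calls traced above (host/auth.sh@42-51) stage one digest/DH-group/secret combination on the kernel nvmet target before each connect attempt. The xtrace output records only the echo commands, not their redirect targets; the sketch below assumes they land in the standard Linux nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), and the paths and variable values are illustrative, not copied from the script:

hostnqn=nqn.2024-02.io.spdk:host0
host_cfs=/sys/kernel/config/nvmet/hosts/$hostnqn

digest=sha256          # becomes 'hmac(sha256)', as echoed at host/auth.sh@48
dhgroup=ffdhe8192      # echoed at host/auth.sh@49
key='DHHC-1:00:...'    # host secret (elided); the NN in DHHC-1:NN: records the hash
ckey='DHHC-1:03:...'   # used to transform it (00 = none, 01/02/03 = SHA-256/384/512)

echo "hmac($digest)" > "$host_cfs/dhchap_hash"
echo "$dhgroup"      > "$host_cfs/dhchap_dhgroup"
echo "$key"          > "$host_cfs/dhchap_key"
[[ -n $ckey ]] && echo "$ckey" > "$host_cfs/dhchap_ctrl_key"   # bidirectional auth only

Key id 4 carries no controller key in this trace (ckey is empty at host/auth.sh@46), so the last attach in each group exercises unidirectional authentication only.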
00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:11.579 09:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:12.512 nvme0n1 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:12.512 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:12.513 
09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:12.513 09:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.445 nvme0n1 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: ]] 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:13.445 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.703 nvme0n1 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: ]] 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
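At this point the trace has rolled over to the sha384 digest and restarted the DH-group sweep at ffdhe2048, which exposes the shape of the driving loops at host/auth.sh@100-104: every digest is crossed with every DH group and every key id, and each combination is attached, verified, and detached through SPDK's JSON-RPC interface (rpc_cmd is the autotest wrapper around scripts/rpc.py). A condensed sketch of one pass, with the array contents inferred from the trace rather than copied from the script, and key0..key4/ckey0..ckey3 assumed to be key names registered earlier in the run:

digests=(sha256 sha384 sha512)   # sha512 assumed; only sha256/sha384 appear above
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # inferred ordering

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in 0 1 2 3 4; do
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target side, see sketch above

      # Host side: restrict negotiation to this one digest/DH group ...
      scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

      # ... connect with the matching key pair (key4 has no controller key) ...
      ckey_arg=()
      (( keyid < 4 )) && ckey_arg=(--dhchap-ctrlr-key "ckey$keyid")
      scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid" "${ckey_arg[@]}"

      # ... then require that exactly the expected controller came up before tearing down.
      [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      scripts/rpc.py bdev_nvme_detach_controller nvme0
    done
  done
done

The nvme0n1 tokens interleaved through the trace appear to be the namespace of the freshly attached controller surfacing between the attach and the get_controllers check, a quick visual confirmation that each DH-HMAC-CHAP handshake actually completed.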
00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:13.703 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.961 nvme0n1 00:41:13.961 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:13.961 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:13.961 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:13.961 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:13.961 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.961 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:13.961 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:13.961 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:13.961 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:13.961 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.961 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:13.961 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:13.961 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: ]] 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.962 nvme0n1 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:13.962 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: ]] 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.219 nvme0n1 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:14.219 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:14.220 09:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.476 nvme0n1 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: ]] 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
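The nvmet_auth_set_key calls traced above (host/auth.sh@42-51) program the kernel target for the next handshake: the echoed 'hmac(sha384)', the DH group name, and the DHHC-1 secrets are written into the nvmet host entry. A minimal standalone sketch of that step, assuming the standard nvmet configfs attribute names and path (neither appears in this trace) and reusing the keyid-0 secrets from the log:

    # Target-side rekey, roughly what nvmet_auth_set_key does (needs root
    # and an nvmet host entry already created for this NQN).
    hostnqn="nqn.2024-02.io.spdk:host0"
    host_cfs="/sys/kernel/config/nvmet/hosts/${hostnqn}"   # assumed configfs path

    digest="sha384"       # $1 in the trace
    dhgroup="ffdhe3072"   # $2
    key="DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh:"
    ckey="DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=:"

    echo "hmac(${digest})" > "${host_cfs}/dhchap_hash"     # matches: echo 'hmac(sha384)'
    echo "${dhgroup}"      > "${host_cfs}/dhchap_dhgroup"  # matches: echo ffdhe3072
    echo "${key}"          > "${host_cfs}/dhchap_key"      # matches: echo DHHC-1:00:...
    # The controller key is optional; keyid 4 in this trace has none, so the
    # script guards the write exactly like the [[ -z '' ]] check in the log.
    [[ -z "${ckey}" ]] || echo "${ckey}" > "${host_cfs}/dhchap_ctrl_key"
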
00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:14.476 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.733 nvme0n1 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: ]] 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
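connect_authenticate (host/auth.sh@55-65) is the initiator-side half of each cycle. A condensed sketch assembled from the RPCs visible in this trace; rpc_cmd wraps SPDK's scripts/rpc.py, 10.0.0.1 is what get_main_ns_ip resolves for tcp (NVMF_INITIATOR_IP), and the keys/ckeys secrets plus the key${keyid}/ckey${keyid} names they are registered under come from earlier in auth.sh, outside this excerpt:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3 ckey=()

        # Mirror auth.sh@58: pass a controller key only when one exists
        # for this keyid (ckeys[4] is empty in this run).
        [[ -n ${ckeys[keyid]:-} ]] && ckey=(--dhchap-ctrlr-key "ckey${keyid}")

        # auth.sh@60: restrict the initiator to the digest/DH-group pair under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # auth.sh@61: the DH-HMAC-CHAP handshake runs during attach, so a bad
        # key combination fails here rather than at I/O time.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # auth.sh@64-65: success leaves exactly one controller named nvme0;
        # detach so the next combination starts clean.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The stray nvme0n1 lines interleaved in the log are the namespace device name printed as each attach exposes the target's namespace.
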
00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:14.733 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.170 nvme0n1 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: ]] 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.170 nvme0n1 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:15.170 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: ]] 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.171 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.427 nvme0n1 00:41:15.427 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.427 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:15.427 09:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:15.427 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.427 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.427 09:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.427 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.428 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.428 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:15.428 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:15.428 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:15.428 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:15.428 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:15.428 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:15.428 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:15.428 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:15.428 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:15.428 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:15.428 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:15.428 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:15.428 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.428 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.428 nvme0n1 00:41:15.428 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.428 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:15.428 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:15.428 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.428 09:07:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: ]] 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.685 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.942 nvme0n1 00:41:15.942 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: ]] 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.943 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.200 nvme0n1 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:16.200 09:07:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: ]] 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:16.200 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:16.201 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:16.201 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:16.201 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:16.201 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:16.201 09:07:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:16.201 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:16.201 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:16.201 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:16.201 09:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:16.201 09:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:16.201 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:16.201 09:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.458 nvme0n1 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: ]] 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:41:16.458 09:07:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:16.458 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.716 nvme0n1 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:16.716 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.973 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:16.973 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:16.973 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:16.973 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:16.973 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:16.973 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:16.973 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:16.973 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:16.974 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:16.974 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:16.974 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:16.974 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:16.974 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:16.974 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:41:16.974 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:17.231 nvme0n1 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: ]] 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:17.231 09:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:17.796 nvme0n1 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: ]] 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:17.796 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:18.053 nvme0n1 00:41:18.053 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:18.311 09:07:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: ]] 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:18.311 09:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:18.877 nvme0n1 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: ]] 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:18.877 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:18.878 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:41:18.878 09:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:18.878 09:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:18.878 09:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:18.878 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:18.878 09:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:18.878 09:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:18.878 09:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:18.878 09:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:18.878 09:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:18.878 09:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:18.878 09:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:18.878 09:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:18.878 09:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:18.878 09:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:18.878 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:18.878 09:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:18.878 09:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:19.443 nvme0n1 00:41:19.443 09:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:19.443 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:19.443 09:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:19.443 09:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:19.443 09:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:19.443 09:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:19.443 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:41:19.443 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:19.443 09:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:19.443 09:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:19.443 09:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:19.443 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:19.443 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:41:19.443 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:19.443 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:19.443 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:19.444 09:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.036 nvme0n1 00:41:20.036 09:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.036 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:20.036 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:20.036 09:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.036 09:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.036 09:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: ]] 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
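
The matching initiator side of each iteration is the four-RPC sequence that repeats throughout this section. Shown below as direct rpc.py invocations for the sha384 / ffdhe8192 / keyid 0 pass starting here; the flags are taken verbatim from the trace, while the `scripts/rpc.py` entry point is an assumption (the trace goes through the test harness's rpc_cmd wrapper), as are the keyring names key0/ckey0, which the test registers earlier, outside this excerpt.

    # One connect_authenticate iteration, as plain RPC calls (sketch).
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Success check from host/auth.sh@64: the authenticated controller shows up.
    [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    scripts/rpc.py bdev_nvme_detach_controller nvme0    # tear down before the next keyid
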
00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.037 09:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.970 nvme0n1 00:41:20.970 09:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.970 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:20.970 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:20.970 09:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.970 09:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.970 09:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.970 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:20.970 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:20.970 09:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.970 09:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: ]] 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.971 09:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:21.904 nvme0n1 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: ]] 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:21.905 09:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:22.838 nvme0n1 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: ]] 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:22.838 09:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:22.839 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:22.839 09:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:22.839 09:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:22.839 09:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:22.839 09:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:22.839 09:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:22.839 09:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:22.839 09:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:22.839 09:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:22.839 09:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:22.839 09:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:22.839 09:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:22.839 09:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:22.839 09:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.772 nvme0n1 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.772 09:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.029 09:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.029 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:24.029 09:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:24.029 09:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:24.029 09:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:24.029 09:07:18 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:24.029 09:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:24.030 09:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:24.030 09:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:24.030 09:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:24.030 09:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:24.030 09:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:24.030 09:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:24.030 09:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.030 09:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.962 nvme0n1 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: ]] 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.962 nvme0n1 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.962 09:07:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: ]] 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:24.962 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:24.963 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:24.963 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.963 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.220 nvme0n1 00:41:25.220 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.220 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:25.220 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:25.220 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.220 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.220 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.220 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:25.220 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: ]] 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.221 nvme0n1 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:25.221 09:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.479 09:07:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: ]] 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:25.479 09:07:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.479 nvme0n1 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:25.479 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:25.480 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:25.480 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:25.480 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:25.480 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:25.480 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:25.480 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.480 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.737 nvme0n1 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: ]] 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:25.737 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:25.738 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:25.738 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:25.738 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:25.738 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:25.738 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:25.738 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:25.738 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.738 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.995 nvme0n1 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.995 
09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: ]] 00:41:25.995 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.996 09:07:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.996 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.254 nvme0n1 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: ]] 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.254 09:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.512 nvme0n1 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.512 09:07:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: ]] 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
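
get_main_ns_ip (nvmf/common.sh@741-755) resolves which address the host should dial for the transport under test: it maps each transport to the name of an environment variable and then dereferences that name, which is why the trace tests the literal strings tcp and NVMF_INITIATOR_IP before finally echoing 10.0.0.1. A minimal sketch follows, assuming the transport is carried in a variable such as TEST_TRANSPORT (that variable name is an assumption; the array literals are taken from the trace):

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}  # holds a variable *name*, e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1           # indirect expansion yields 10.0.0.1 in this run
    echo "${!ip}"
}
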
00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.512 nvme0n1 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.512 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:26.770 
09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.770 nvme0n1 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: ]] 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.770 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.027 nvme0n1 00:41:27.027 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.027 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:27.027 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:27.027 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:27.027 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.027 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: ]] 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:27.285 09:07:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:27.285 09:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.542 nvme0n1 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
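
Each block in this stretch of the log repeats the same cycle, driven by the nested loops at host/auth.sh@101-103: for every DH group (ffdhe2048, ffdhe3072, and now ffdhe4096) and every key id 0-4, the target is re-keyed and connect_authenticate (@104) re-runs the host-side handshake. Per iteration that handshake is: rpc_cmd bdev_nvme_set_options restricts the initiator to the digest/dhgroup pair under test, bdev_nvme_attach_controller connects to 10.0.0.1:4420 with --dhchap-key keyN (plus --dhchap-ctrlr-key ckeyN when bidirectional authentication is tested), and bdev_nvme_get_controllers piped through jq -r '.[].name' must report nvme0 before the controller is detached again. A hedged sketch of that driver loop, with the digest fixed at sha512 as in this excerpt (any outer digest loop sits outside this section; array contents are illustrative):

# Loop structure implied by auth.sh@101-104; keys/ckeys hold the DHHC-1
# secrets seen in the trace (keyid 4 deliberately has no controller key).
dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096")
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # program the target
        connect_authenticate sha512 "$dhgroup" "$keyid"  # host-side round trip
    done
done
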
00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: ]] 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:27.543 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.800 nvme0n1 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: ]] 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:27.800 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.058 nvme0n1 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:28.058 09:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.315 nvme0n1 00:41:28.315 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:28.315 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:28.315 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:28.315 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:28.315 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.315 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: ]] 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:28.575 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:41:28.576 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:28.576 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.576 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:28.576 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:28.576 09:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:28.576 09:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:28.576 09:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:28.576 09:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:28.576 09:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:28.576 09:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:41:28.576 09:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:28.576 09:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:28.576 09:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:28.576 09:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:28.576 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:28.576 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:28.576 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.138 nvme0n1 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: ]] 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
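[Annotation] The record above marks entry into connect_authenticate (host/auth.sh@104); the records that follow trace its body. Below is a sketch of the helper's flow reconstructed from the @55-@65 trace records, not the verbatim test source. It assumes rpc_cmd wraps SPDK's rpc.py as elsewhere in this run, that the DHHC-1 secrets were registered under the names key0..key4/ckey0..ckey3 earlier in the run, and that get_main_ns_ip resolves to 10.0.0.1 (NVMF_INITIATOR_IP) for the tcp transport. The paired nvmet_auth_set_key records (@42-@51) push the matching secret to the kernel target before each connect; their echo targets are attribute files that xtrace does not print.

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # --dhchap-ctrlr-key is added only when ckeys[keyid] is set (the @58
    # record); keyid 4 has no controller key in this run.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # Restrict the host to the digest/DH-group combination under test (@60).
    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect with DH-HMAC-CHAP: --dhchap-key authenticates the host to the
    # controller, --dhchap-ctrlr-key the controller to the host (@61).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # Verify the controller came up, then detach before the next
    # combination (@64-@65).
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

The @101/@102 loops repeat this cycle for every (dhgroup, keyid) combination, here sha512 over ffdhe4096, ffdhe6144, and ffdhe8192 with keyids 0-4; the nvme0n1 lines between iterations correspond to the namespace block device appearing after each successful attach.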
00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:29.138 09:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:29.139 09:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:29.139 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:29.139 09:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.703 nvme0n1 00:41:29.703 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:29.703 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:29.703 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:29.703 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:29.703 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: ]] 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:29.704 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:30.268 nvme0n1 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: ]] 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:30.268 09:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:30.833 nvme0n1 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:30.833 09:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.398 nvme0n1 00:41:31.398 09:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:31.398 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:31.398 09:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:31.398 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:31.398 09:07:25 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.398 09:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:31.398 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:31.398 09:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:31.398 09:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:31.398 09:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFkZTcyNzYwYWM5NDQ1NTJhY2Y3NWViMzQxOTU0MmO7cVrh: 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: ]] 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWI2NWNiN2MxZGU1YjFlYmJmMzFkYTA3MWRkNjQzYTRmOTdiZWMyNDEwNDBjMDM1YTY3OTAzZWEzZGVhMmQ4MBiDX28=: 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:31.398 09:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.329 nvme0n1 00:41:32.329 09:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:32.329 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:32.329 09:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:32.329 09:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.329 09:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:32.329 09:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:32.329 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: ]] 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:32.330 09:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.270 nvme0n1 00:41:33.270 09:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:33.270 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:33.270 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:33.270 09:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:33.270 09:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.270 09:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:33.270 09:07:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:33.271 09:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:33.271 09:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:33.271 09:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.271 09:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3YjI1M2ZkNWJjNDIyNGIwMWY0NWQwNGQzOGNmOWT7gHwU: 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: ]] 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNlM2ZkN2IyNDI3ZjM3YTdmOTI0NmNkYzhhNDZlMGRQCItf: 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:33.271 09:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.236 nvme0n1 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI1YzYzMDIzYzRlY2Y4NmQ3YmYwZTRkZWY3MGQ3NjA4NjYwOWE1NDk0NDgzZTFjdzq6aQ==: 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: ]] 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Y2N2ZmZjc5MzdjYWE5MjRkZmIxMmFiNTk5MjUyMjbDpGhZ: 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:41:34.237 09:07:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.237 09:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.168 nvme0n1 00:41:35.168 09:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.168 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:35.168 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:35.168 09:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.168 09:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.168 09:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.168 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:35.168 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:35.168 09:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.168 09:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc0MTZlMzVjNDhiYjUxYmEwYmExNjI2OTQ3NmUwMDA4MTgzY2VmNzAyMmQyZmY5ZDE2NzZjZDZmZTRiZDJkYdCbv0Q=: 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:41:35.425 09:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.357 nvme0n1 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFjZTk4YmQwZTE5OGI1NzdjN2NlNDM5ZmFjYzc4MjNlN2E4ODEyYTU5MDZjNjIyG/WV4A==: 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: ]] 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjhjZTJlYjljOTE0MGU4NGQ1NTRmMTVkMDJiM2M0ZGE5YTFmMzI5OWUwYmNkNTFmrOpQmA==: 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:36.357 
09:07:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.357 request: 00:41:36.357 { 00:41:36.357 "name": "nvme0", 00:41:36.357 "trtype": "tcp", 00:41:36.357 "traddr": "10.0.0.1", 00:41:36.357 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:41:36.357 "adrfam": "ipv4", 00:41:36.357 "trsvcid": "4420", 00:41:36.357 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:41:36.357 "method": "bdev_nvme_attach_controller", 00:41:36.357 "req_id": 1 00:41:36.357 } 00:41:36.357 Got JSON-RPC error response 00:41:36.357 response: 00:41:36.357 { 00:41:36.357 "code": -32602, 00:41:36.357 "message": "Invalid parameters" 00:41:36.357 } 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:41:36.357 09:07:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:41:36.357 
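The exchange above is host/auth.sh@112-114 exercising the negative path: the kernel target at 10.0.0.1 now enforces DHCHAP for nqn.2024-02.io.spdk:cnode0, so an attach that supplies no key must fail with JSON-RPC error -32602 and leave zero controllers behind. A minimal standalone sketch of the same check (addresses, NQNs and RPC taken from this log; the if/else wrapper stands in for the test's NOT helper and the rpc.py path is abbreviated):

if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
       -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
       -n nqn.2024-02.io.spdk:cnode0; then
    echo "FAIL: keyless attach to a DHCHAP-protected subsystem succeeded"
else
    echo "OK: attach rejected"   # rpc.py exits non-zero on the -32602 response
fi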
09:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.357 request: 00:41:36.357 { 00:41:36.357 "name": "nvme0", 00:41:36.357 "trtype": "tcp", 00:41:36.357 "traddr": "10.0.0.1", 00:41:36.357 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:41:36.357 "adrfam": "ipv4", 00:41:36.357 "trsvcid": "4420", 00:41:36.357 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:41:36.357 "dhchap_key": "key2", 00:41:36.357 "method": "bdev_nvme_attach_controller", 00:41:36.357 "req_id": 1 00:41:36.357 } 00:41:36.357 Got JSON-RPC error response 00:41:36.357 response: 00:41:36.357 { 00:41:36.357 "code": -32602, 00:41:36.357 "message": "Invalid parameters" 00:41:36.357 } 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:41:36.357 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 
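The second negative case (host/auth.sh@117 above) repeats the attach with --dhchap-key key2, which does not correspond to the secret the kernel target was provisioned with (keyid 1 via nvmet_auth_set_key sha256 ffdhe2048 1 earlier), and again expects -32602. The DHHC-1:<id>:<base64> strings being passed around are standard NVMe DHCHAP secrets; as a hedged aside (nvme-cli syntax from its documentation, not from this log), comparable secrets can be generated with:

nvme gen-dhchap-key -l 32        # 32-byte secret, no transform: DHHC-1:00:...
nvme gen-dhchap-key -l 48 -m 2   # -m 2 applies an HMAC-SHA-384 transform: DHHC-1:02:...
# (hypothetical invocations; assumes an nvme-cli recent enough to ship gen-dhchap-key)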
00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.358 request: 00:41:36.358 { 00:41:36.358 "name": "nvme0", 00:41:36.358 "trtype": "tcp", 00:41:36.358 "traddr": "10.0.0.1", 00:41:36.358 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:41:36.358 "adrfam": "ipv4", 00:41:36.358 "trsvcid": "4420", 00:41:36.358 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:41:36.358 "dhchap_key": "key1", 00:41:36.358 "dhchap_ctrlr_key": "ckey2", 00:41:36.358 "method": "bdev_nvme_attach_controller", 00:41:36.358 
"req_id": 1 00:41:36.358 } 00:41:36.358 Got JSON-RPC error response 00:41:36.358 response: 00:41:36.358 { 00:41:36.358 "code": -32602, 00:41:36.358 "message": "Invalid parameters" 00:41:36.358 } 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:36.358 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:36.358 rmmod nvme_tcp 00:41:36.615 rmmod nvme_fabrics 00:41:36.615 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:36.615 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:41:36.615 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:41:36.615 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2421410 ']' 00:41:36.615 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2421410 00:41:36.615 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@947 -- # '[' -z 2421410 ']' 00:41:36.615 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # kill -0 2421410 00:41:36.615 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # uname 00:41:36.615 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:41:36.615 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2421410 00:41:36.615 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:41:36.615 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:41:36.615 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2421410' 00:41:36.615 killing process with pid 2421410 00:41:36.615 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # kill 2421410 00:41:36.615 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@971 -- # wait 2421410 00:41:36.874 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:36.874 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:36.874 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:36.874 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:36.874 09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:36.874 
09:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:36.874 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:36.874 09:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:38.777 09:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:38.777 09:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:41:38.777 09:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:41:38.777 09:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:41:38.777 09:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:41:38.777 09:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:41:38.777 09:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:41:38.777 09:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:41:38.777 09:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:41:38.777 09:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:41:38.777 09:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:41:38.777 09:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:41:38.777 09:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:40.146 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:40.146 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:40.146 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:40.146 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:40.146 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:40.146 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:40.146 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:40.146 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:40.146 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:40.146 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:40.146 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:40.146 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:40.146 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:40.146 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:40.146 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:40.146 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:41.078 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:41:41.335 09:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.GaF /tmp/spdk.key-null.BMS /tmp/spdk.key-sha256.kvk /tmp/spdk.key-sha384.W56 /tmp/spdk.key-sha512.CQ0 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:41:41.335 09:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:42.708 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:41:42.708 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:41:42.708 0000:00:04.5 (8086 0e25): Already 
using the vfio-pci driver 00:41:42.708 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:41:42.708 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:41:42.708 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:41:42.708 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:41:42.708 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:41:42.708 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:41:42.708 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:41:42.708 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:41:42.708 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:41:42.708 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:41:42.708 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:41:42.708 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:41:42.708 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:41:42.708 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:41:42.708 00:41:42.708 real 0m47.182s 00:41:42.708 user 0m44.544s 00:41:42.708 sys 0m6.158s 00:41:42.708 09:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:41:42.708 09:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:42.708 ************************************ 00:41:42.708 END TEST nvmf_auth_host 00:41:42.708 ************************************ 00:41:42.708 09:07:37 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:41:42.708 09:07:37 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:41:42.708 09:07:37 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:41:42.708 09:07:37 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:41:42.708 09:07:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:42.708 ************************************ 00:41:42.708 START TEST nvmf_digest 00:41:42.708 ************************************ 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:41:42.708 * Looking for test storage... 
00:41:42.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:42.708 09:07:37 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:42.708 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:42.709 09:07:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:41:42.709 09:07:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:41:45.233 Found 0000:09:00.0 (0x8086 - 0x159b) 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:41:45.233 Found 0000:09:00.1 (0x8086 - 0x159b) 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:41:45.233 Found net devices under 0000:09:00.0: cvl_0_0 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:41:45.233 Found net devices under 0000:09:00.1: cvl_0_1 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:45.233 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:45.234 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:45.234 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:45.234 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:45.234 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:45.234 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:45.234 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:45.234 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:45.234 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:45.234 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:45.234 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:45.234 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:45.234 09:07:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:45.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:45.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:41:45.491 00:41:45.491 --- 10.0.0.2 ping statistics --- 00:41:45.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:45.491 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:45.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:45.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:41:45.491 00:41:45.491 --- 10.0.0.1 ping statistics --- 00:41:45.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:45.491 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:41:45.491 ************************************ 00:41:45.491 START TEST nvmf_digest_clean 00:41:45.491 ************************************ 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # run_digest 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:41:45.491 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:41:45.492 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:41:45.492 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:41:45.492 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:41:45.492 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:45.492 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@721 -- # xtrace_disable 00:41:45.492 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:45.492 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2431024 00:41:45.492 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:41:45.492 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2431024 00:41:45.492 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 2431024 ']' 00:41:45.492 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:45.492 
09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:41:45.492 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:45.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:45.492 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:41:45.492 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:45.492 [2024-05-15 09:07:40.193755] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:41:45.492 [2024-05-15 09:07:40.193847] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:45.492 EAL: No free 2048 kB hugepages reported on node 1 00:41:45.492 [2024-05-15 09:07:40.272309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:45.749 [2024-05-15 09:07:40.360241] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:45.749 [2024-05-15 09:07:40.360292] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:45.749 [2024-05-15 09:07:40.360306] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:45.749 [2024-05-15 09:07:40.360318] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:45.749 [2024-05-15 09:07:40.360328] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
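The startup banner above belongs to the nvmf_tgt launched for the digest tests: nvmf/common.sh@480 runs it inside the cvl_0_0_ns_spdk network namespace with --wait-for-rpc, which parks the app before full initialization so the test can configure it over /var/tmp/spdk.sock first. Reduced to its essentials (command and variable name from the log; the backgrounding and PID capture are a sketch of what nvmfappstart/waitforlisten do, not a verbatim copy):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!   # waitforlisten polls this PID until /var/tmp/spdk.sock answers RPCs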
00:41:45.749 [2024-05-15 09:07:40.360353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:45.749 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:41:45.749 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:41:45.749 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:45.749 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@727 -- # xtrace_disable 00:41:45.749 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:45.749 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:45.749 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:41:45.749 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:41:45.749 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:41:45.749 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:45.749 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:45.749 null0 00:41:46.007 [2024-05-15 09:07:40.542192] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:46.007 [2024-05-15 09:07:40.566181] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:41:46.007 [2024-05-15 09:07:40.566456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:46.007 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:46.007 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:41:46.007 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:41:46.007 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:41:46.007 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:41:46.007 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:41:46.007 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:41:46.007 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:41:46.007 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2431173 00:41:46.007 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2431173 /var/tmp/bperf.sock 00:41:46.007 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:41:46.007 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 2431173 ']' 00:41:46.007 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:46.007 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local 
max_retries=100 00:41:46.007 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:46.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:46.007 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:41:46.007 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:46.007 [2024-05-15 09:07:40.611330] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:41:46.007 [2024-05-15 09:07:40.611397] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431173 ] 00:41:46.007 EAL: No free 2048 kB hugepages reported on node 1 00:41:46.007 [2024-05-15 09:07:40.681443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:46.007 [2024-05-15 09:07:40.768636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:41:46.264 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:41:46.264 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:41:46.264 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:41:46.264 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:41:46.264 09:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:41:46.520 09:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:41:46.520 09:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:41:46.777 nvme0n1 00:41:46.777 09:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:41:46.777 09:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:47.034 Running I/O for 2 seconds... 
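This first bperf run measures randread with 4 KiB I/O at queue depth 128. Note that the controller was attached through the bperf RPC socket with --ddgst (host/digest.sh@89 above), so every NVMe/TCP data PDU carries a CRC32C data digest that the accel framework must compute; that is the work whose placement the test verifies afterwards. The attach, restated on its own (arguments from the log; the --hdgst header-digest flag in the comment is the sibling option, not used in this run):

scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# --hdgst would additionally checksum PDU headers (illustrative contrast)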
00:41:48.928
00:41:48.928 Latency(us)
00:41:48.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:48.928 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:41:48.928 nvme0n1 : 2.01 15604.58 60.96 0.00 0.00 8189.85 3810.80 18155.90
00:41:48.928 ===================================================================================================================
00:41:48.928 Total : 15604.58 60.96 0.00 0.00 8189.85 3810.80 18155.90
00:41:48.928 0
00:41:48.928 09:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
09:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
09:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
09:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
09:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:41:48.928 | select(.opcode=="crc32c")
00:41:48.928 | "\(.module_name) \(.executed)"'
00:41:49.186 09:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:41:49.186 09:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:41:49.186 09:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:41:49.186 09:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:41:49.186 09:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2431173
00:41:49.186 09:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 2431173 ']'
00:41:49.186 09:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 2431173
00:41:49.186 09:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname
00:41:49.186 09:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:41:49.186 09:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2431173
00:41:49.186 09:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:41:49.186 09:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:41:49.186 09:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2431173'
00:41:49.186 killing process with pid 2431173
00:41:49.186 09:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 2431173
00:41:49.186 Received shutdown signal, test time was about 2.000000 seconds
00:41:49.186
00:41:49.186 Latency(us)
00:41:49.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:49.186 ===================================================================================================================
00:41:49.186 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:41:49.186 09:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 2431173
00:41:49.472 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:41:49.472 09:07:44
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:41:49.472 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:41:49.472 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:41:49.472 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:41:49.472 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:41:49.472 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:41:49.472 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2431573 00:41:49.472 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2431573 /var/tmp/bperf.sock 00:41:49.472 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:41:49.472 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 2431573 ']' 00:41:49.472 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:49.472 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:41:49.472 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:49.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:49.472 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:41:49.472 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:49.472 [2024-05-15 09:07:44.160579] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:41:49.472 [2024-05-15 09:07:44.160675] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431573 ] 00:41:49.472 I/O size of 131072 is greater than zero copy threshold (65536). 00:41:49.472 Zero copy mechanism will not be used. 
00:41:49.472 EAL: No free 2048 kB hugepages reported on node 1 00:41:49.472 [2024-05-15 09:07:44.232116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:49.729 [2024-05-15 09:07:44.322487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:41:49.729 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:41:49.729 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:41:49.729 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:41:49.729 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:41:49.729 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:41:49.987 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:41:49.987 09:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:41:50.244 nvme0n1 00:41:50.244 09:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:41:50.244 09:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:50.515 I/O size of 131072 is greater than zero copy threshold (65536). 00:41:50.515 Zero copy mechanism will not be used. 00:41:50.515 Running I/O for 2 seconds... 
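The second run switches to 128 KiB reads at queue depth 16; because 131072 exceeds the 65536-byte zero-copy threshold, the socket layer announces it will fall back to copying. The bdevperf invocation pattern is the same for every run in this file; spelled out with this run's parameters (taken from host/digest.sh@82 above, path abbreviated):

./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
# -o I/O size in bytes, -q queue depth, -t seconds; -z keeps bdevperf idle
# until bdevperf.py sends the perform_tests RPC seen in the log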
00:41:52.414
00:41:52.414 Latency(us)
00:41:52.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:52.414 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:41:52.414 nvme0n1 : 2.00 4692.55 586.57 0.00 0.00 3404.96 782.79 4830.25
00:41:52.414 ===================================================================================================================
00:41:52.414 Total : 4692.55 586.57 0.00 0.00 3404.96 782.79 4830.25
00:41:52.414 0
00:41:52.414 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:41:52.414 | select(.opcode=="crc32c")
00:41:52.414 | "\(.module_name) \(.executed)"'
09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:41:52.672 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:41:52.672 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:41:52.672 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:41:52.672 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:41:52.672 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2431573
00:41:52.672 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 2431573 ']'
00:41:52.672 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 2431573
00:41:52.672 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname
00:41:52.672 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:41:52.672 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2431573
00:41:52.672 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:41:52.672 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:41:52.672 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2431573'
00:41:52.672 killing process with pid 2431573
00:41:52.672 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 2431573
00:41:52.672 Received shutdown signal, test time was about 2.000000 seconds
00:41:52.672
00:41:52.672 Latency(us)
00:41:52.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:52.672 ===================================================================================================================
00:41:52.672 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:41:52.672 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 2431573
00:41:52.929 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:41:52.929 09:07:47
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:41:52.929 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:41:52.929 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:41:52.929 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:41:52.929 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:41:52.929 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:41:52.930 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2431985 00:41:52.930 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:41:52.930 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2431985 /var/tmp/bperf.sock 00:41:52.930 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 2431985 ']' 00:41:52.930 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:52.930 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:41:52.930 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:52.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:52.930 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:41:52.930 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:52.930 [2024-05-15 09:07:47.694791] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
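The bdevperf invocation above repeats for every pass with only the workload parameters changing. A hedged gloss of the flags, based on common SPDK bdevperf usage (the tool's own -h output is authoritative):

    # -m 2: core mask 0x2, i.e. run the reactor on core 1
    # -r /var/tmp/bperf.sock: RPC listen socket used by the bperf_rpc/bperf_py helpers
    # -w randwrite -o 4096 -q 128: workload type, I/O size in bytes, queue depth
    # -t 2: run the test for 2 seconds
    # -z: do not start I/O until the perform_tests RPC arrives
    # --wait-for-rpc: defer framework init until framework_start_init is called
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc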
00:41:52.930 [2024-05-15 09:07:47.694869] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431985 ] 00:41:53.187 EAL: No free 2048 kB hugepages reported on node 1 00:41:53.187 [2024-05-15 09:07:47.765756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:53.187 [2024-05-15 09:07:47.853272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:41:53.187 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:41:53.187 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:41:53.187 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:41:53.187 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:41:53.187 09:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:41:53.445 09:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:41:53.445 09:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:41:54.012 nvme0n1 00:41:54.012 09:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:41:54.012 09:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:54.012 Running I/O for 2 seconds... 
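Once perform_tests returns (first results block below), the script checks where the CRC-32C work actually ran: it pulls accel-layer statistics over the same socket and expects a positive execution count attributed to the software module, since these clean runs configure no offload engine. The pipeline, as it appears in the xtrace (rpc.py path shortened to its repository-relative form):

    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # the output is consumed with: read -r acc_module acc_executed
    # and then, in effect, asserted with (( acc_executed > 0 )) and [[ $acc_module == software ]]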
00:41:56.539
00:41:56.539 Latency(us)
00:41:56.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:56.539 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:41:56.539 nvme0n1 : 2.01 20613.88 80.52 0.00 0.00 6194.16 3046.21 10194.49
00:41:56.539 ===================================================================================================================
00:41:56.539 Total : 20613.88 80.52 0.00 0.00 6194.16 3046.21 10194.49
00:41:56.539 0
00:41:56.539 09:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 09:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 09:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 09:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 09:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:41:56.539 | select(.opcode=="crc32c") 00:41:56.539 | "\(.module_name) \(.executed)"' 00:41:56.539 09:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:41:56.539 09:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:41:56.539 09:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:41:56.539 09:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:41:56.539 09:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2431985 00:41:56.539 09:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 2431985 ']' 00:41:56.539 09:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 2431985 00:41:56.539 09:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:41:56.539 09:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:41:56.539 09:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2431985 00:41:56.539 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:41:56.539 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:41:56.539 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2431985' killing process with pid 2431985 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 2431985
Received shutdown signal, test time was about 2.000000 seconds
00:41:56.539
00:41:56.539 Latency(us)
00:41:56.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:56.539 ===================================================================================================================
00:41:56.539 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:41:56.539 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 2431985 00:41:56.539 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:41:56.539 09:07:51
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:41:56.539 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:41:56.539 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:41:56.539 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:41:56.539 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:41:56.539 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:41:56.539 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2432385 00:41:56.539 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2432385 /var/tmp/bperf.sock 00:41:56.539 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:41:56.539 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 2432385 ']' 00:41:56.539 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:56.539 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:41:56.539 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:56.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:56.539 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:41:56.539 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:56.539 [2024-05-15 09:07:51.293772] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:41:56.539 [2024-05-15 09:07:51.293860] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2432385 ] 00:41:56.539 I/O size of 131072 is greater than zero copy threshold (65536). 00:41:56.539 Zero copy mechanism will not be used. 
00:41:56.539 EAL: No free 2048 kB hugepages reported on node 1 00:41:56.797 [2024-05-15 09:07:51.366863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:56.797 [2024-05-15 09:07:51.456427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:41:56.797 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:41:56.797 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:41:56.797 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:41:56.797 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:41:56.797 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:41:57.056 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:41:57.056 09:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:41:57.621 nvme0n1 00:41:57.621 09:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:41:57.621 09:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:57.879 I/O size of 131072 is greater than zero copy threshold (65536). 00:41:57.879 Zero copy mechanism will not be used. 00:41:57.879 Running I/O for 2 seconds... 
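When this final digest-clean pass finishes (results below), bdevperf is torn down through the same killprocess helper seen after each run. Stripped of xtrace noise, it reduces to roughly this sketch (reconstructed from the log, not the verbatim function):

    pid=2432385                        # bperfpid recorded when bdevperf was launched
    kill -0 "$pid"                     # verify the process is still alive
    ps --no-headers -o comm= "$pid"    # bdevperf reports itself as reactor_1; refuse to kill sudo
    kill "$pid"                        # default SIGTERM triggers the clean shutdown path
    wait "$pid"                        # reap it (the test script is bdevperf's parent)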
00:41:59.778
00:41:59.778 Latency(us)
00:41:59.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:59.778 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:41:59.778 nvme0n1 : 2.00 5344.13 668.02 0.00 0.00 2985.38 2281.62 13689.74
00:41:59.778 ===================================================================================================================
00:41:59.778 Total : 5344.13 668.02 0.00 0.00 2985.38 2281.62 13689.74
00:41:59.778 0
00:41:59.778 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:41:59.778 | select(.opcode=="crc32c") 00:41:59.778 | "\(.module_name) \(.executed)"' 00:42:00.036 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:42:00.036 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:42:00.036 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:42:00.036 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:42:00.036 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2432385 00:42:00.036 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 2432385 ']' 00:42:00.036 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 2432385 00:42:00.036 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:42:00.036 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:42:00.036 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2432385 00:42:00.036 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:42:00.036 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:42:00.036 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2432385' killing process with pid 2432385 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 2432385
Received shutdown signal, test time was about 2.000000 seconds
00:42:00.036
00:42:00.036 Latency(us)
00:42:00.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:00.036 ===================================================================================================================
00:42:00.036 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:42:00.036 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 2432385 00:42:00.294 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2431024 00:42:00.294 09:07:54
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 2431024 ']' 00:42:00.294 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 2431024 00:42:00.294 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:42:00.294 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:42:00.294 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2431024 00:42:00.294 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:42:00.294 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:42:00.294 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2431024' 00:42:00.294 killing process with pid 2431024 00:42:00.294 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 2431024 00:42:00.294 [2024-05-15 09:07:54.960694] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:42:00.294 09:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 2431024 00:42:00.553 00:42:00.553 real 0m15.056s 00:42:00.553 user 0m29.296s 00:42:00.553 sys 0m4.327s 00:42:00.553 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # xtrace_disable 00:42:00.553 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:42:00.553 ************************************ 00:42:00.553 END TEST nvmf_digest_clean 00:42:00.553 ************************************ 00:42:00.553 09:07:55 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:42:00.553 09:07:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:42:00.553 09:07:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:42:00.553 09:07:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:42:00.553 ************************************ 00:42:00.553 START TEST nvmf_digest_error 00:42:00.553 ************************************ 00:42:00.553 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # run_digest_error 00:42:00.553 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:42:00.553 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:42:00.553 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@721 -- # xtrace_disable 00:42:00.553 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:00.553 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2432888 00:42:00.553 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:42:00.553 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2432888 00:42:00.553 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 
2432888 ']' 00:42:00.553 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:00.553 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:42:00.553 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:00.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:00.553 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:42:00.553 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:00.553 [2024-05-15 09:07:55.307729] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:42:00.553 [2024-05-15 09:07:55.307819] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:00.812 EAL: No free 2048 kB hugepages reported on node 1 00:42:00.812 [2024-05-15 09:07:55.387018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:00.812 [2024-05-15 09:07:55.477540] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:00.812 [2024-05-15 09:07:55.477594] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:00.812 [2024-05-15 09:07:55.477618] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:00.812 [2024-05-15 09:07:55.477632] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:00.812 [2024-05-15 09:07:55.477645] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
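The nvmf_digest_error test starting here brings the target up the same way as before, with one addition visible a few lines below: before the framework starts, the crc32c opcode is reassigned from the software module to the accel error module, so that digest failures can be injected on demand. rpc_cmd in this xtrace is the suite's wrapper around scripts/rpc.py talking to the target's default /var/tmp/spdk.sock socket, so the key call is effectively:

    # route all crc32c operations through the error-injection accel module
    scripts/rpc.py accel_assign_opc -o crc32c -m error
    # the target acknowledges with: "Operation crc32c will be assigned to module error"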
00:42:00.812 [2024-05-15 09:07:55.477675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:00.812 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:42:00.812 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:42:00.812 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:42:00.812 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@727 -- # xtrace_disable 00:42:00.812 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:00.812 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:00.812 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:42:00.812 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:00.812 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:00.812 [2024-05-15 09:07:55.546252] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:42:00.812 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:00.812 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:42:00.812 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:42:00.812 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:00.812 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:01.071 null0 00:42:01.071 [2024-05-15 09:07:55.664537] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:01.071 [2024-05-15 09:07:55.688530] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:42:01.071 [2024-05-15 09:07:55.688841] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:01.071 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:01.071 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:42:01.071 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:42:01.071 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:42:01.071 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:42:01.071 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:42:01.071 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2432968 00:42:01.071 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2432968 /var/tmp/bperf.sock 00:42:01.071 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:42:01.071 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2432968 ']' 00:42:01.071 
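run_bperf_err differs from the clean runs in three knobs, all visible in the xtrace that follows: NVMe error statistics and the bdev-layer retry count are set before the controller attaches, digest-error injection is explicitly disabled during the attach itself, and it is flipped to corrupt mode just before I/O starts. Collected from the log (target-side calls go to the default /var/tmp/spdk.sock):

    # bdevperf side: enable per-controller error counters and set --bdev-retry-count to -1
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target side: no corruption while the controller attaches...
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    # ...then corrupt the next 256 crc32c operations once the workload begins
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256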
09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:01.071 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:42:01.071 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:01.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:01.071 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:42:01.071 09:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:01.071 [2024-05-15 09:07:55.732773] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:42:01.071 [2024-05-15 09:07:55.732837] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2432968 ] 00:42:01.071 EAL: No free 2048 kB hugepages reported on node 1 00:42:01.071 [2024-05-15 09:07:55.802510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:01.329 [2024-05-15 09:07:55.891330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:01.329 09:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:42:01.329 09:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:42:01.329 09:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:42:01.329 09:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:42:01.586 09:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:42:01.586 09:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:01.586 09:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:01.586 09:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:01.586 09:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:01.586 09:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:02.153 nvme0n1 00:42:02.153 09:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:42:02.153 09:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:02.153 09:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:02.153 09:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:02.153 09:07:56 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:42:02.153 09:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:02.153 Running I/O for 2 seconds... 00:42:02.153 [2024-05-15 09:07:56.888824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.153 [2024-05-15 09:07:56.888881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.153 [2024-05-15 09:07:56.888905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.153 [2024-05-15 09:07:56.905150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.153 [2024-05-15 09:07:56.905188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.153 [2024-05-15 09:07:56.905208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.153 [2024-05-15 09:07:56.918382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.153 [2024-05-15 09:07:56.918415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.153 [2024-05-15 09:07:56.918433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.153 [2024-05-15 09:07:56.930801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.154 [2024-05-15 09:07:56.930835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.154 [2024-05-15 09:07:56.930852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.154 [2024-05-15 09:07:56.944046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.154 [2024-05-15 09:07:56.944082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.154 [2024-05-15 09:07:56.944101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.413 [2024-05-15 09:07:56.958557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.413 [2024-05-15 09:07:56.958591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.413 [2024-05-15 09:07:56.958610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.413 [2024-05-15 09:07:56.972298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1f64420) 00:42:02.413 [2024-05-15 09:07:56.972331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.413 [2024-05-15 09:07:56.972361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.413 [2024-05-15 09:07:56.984954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.413 [2024-05-15 09:07:56.984989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.413 [2024-05-15 09:07:56.985009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.413 [2024-05-15 09:07:56.997909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.413 [2024-05-15 09:07:56.997944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.413 [2024-05-15 09:07:56.997964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.413 [2024-05-15 09:07:57.013254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.413 [2024-05-15 09:07:57.013285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.413 [2024-05-15 09:07:57.013303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.413 [2024-05-15 09:07:57.025713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.413 [2024-05-15 09:07:57.025749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.413 [2024-05-15 09:07:57.025769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.413 [2024-05-15 09:07:57.042862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.413 [2024-05-15 09:07:57.042897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.413 [2024-05-15 09:07:57.042916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.413 [2024-05-15 09:07:57.055488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.413 [2024-05-15 09:07:57.055520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.413 [2024-05-15 09:07:57.055537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.413 [2024-05-15 09:07:57.069232] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.413 [2024-05-15 09:07:57.069280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.413 [2024-05-15 09:07:57.069297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.413 [2024-05-15 09:07:57.084223] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.413 [2024-05-15 09:07:57.084271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.413 [2024-05-15 09:07:57.084289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.413 [2024-05-15 09:07:57.097590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.413 [2024-05-15 09:07:57.097625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.413 [2024-05-15 09:07:57.097644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.413 [2024-05-15 09:07:57.110940] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.413 [2024-05-15 09:07:57.110974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.413 [2024-05-15 09:07:57.110994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.413 [2024-05-15 09:07:57.123604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.413 [2024-05-15 09:07:57.123638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.413 [2024-05-15 09:07:57.123657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.413 [2024-05-15 09:07:57.137925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.413 [2024-05-15 09:07:57.137959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.414 [2024-05-15 09:07:57.137978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.414 [2024-05-15 09:07:57.150486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.414 [2024-05-15 09:07:57.150517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.414 [2024-05-15 09:07:57.150534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:42:02.414 [2024-05-15 09:07:57.166364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.414 [2024-05-15 09:07:57.166396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.414 [2024-05-15 09:07:57.166414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.414 [2024-05-15 09:07:57.179368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.414 [2024-05-15 09:07:57.179398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.414 [2024-05-15 09:07:57.179414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.414 [2024-05-15 09:07:57.194565] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.414 [2024-05-15 09:07:57.194600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.414 [2024-05-15 09:07:57.194620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.673 [2024-05-15 09:07:57.208869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.673 [2024-05-15 09:07:57.208908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.673 [2024-05-15 09:07:57.208938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.673 [2024-05-15 09:07:57.224099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.673 [2024-05-15 09:07:57.224135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.673 [2024-05-15 09:07:57.224155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.673 [2024-05-15 09:07:57.237974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.673 [2024-05-15 09:07:57.238008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.673 [2024-05-15 09:07:57.238028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.673 [2024-05-15 09:07:57.251747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.673 [2024-05-15 09:07:57.251783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.673 [2024-05-15 09:07:57.251803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.673 [2024-05-15 09:07:57.266919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.673 [2024-05-15 09:07:57.266955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.673 [2024-05-15 09:07:57.266975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.673 [2024-05-15 09:07:57.282970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.673 [2024-05-15 09:07:57.283006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.673 [2024-05-15 09:07:57.283026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.673 [2024-05-15 09:07:57.296689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.673 [2024-05-15 09:07:57.296724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.673 [2024-05-15 09:07:57.296745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.673 [2024-05-15 09:07:57.312202] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.673 [2024-05-15 09:07:57.312262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.673 [2024-05-15 09:07:57.312281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.673 [2024-05-15 09:07:57.325085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.673 [2024-05-15 09:07:57.325120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.673 [2024-05-15 09:07:57.325139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.673 [2024-05-15 09:07:57.338410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.673 [2024-05-15 09:07:57.338446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.673 [2024-05-15 09:07:57.338464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.673 [2024-05-15 09:07:57.351301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.673 [2024-05-15 09:07:57.351329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.673 [2024-05-15 09:07:57.351345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.673 [2024-05-15 09:07:57.364911] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.673 [2024-05-15 09:07:57.364945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.673 [2024-05-15 09:07:57.364965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.673 [2024-05-15 09:07:57.377658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.673 [2024-05-15 09:07:57.377691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.673 [2024-05-15 09:07:57.377710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.673 [2024-05-15 09:07:57.391304] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.673 [2024-05-15 09:07:57.391333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.673 [2024-05-15 09:07:57.391364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.673 [2024-05-15 09:07:57.404512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.673 [2024-05-15 09:07:57.404542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.673 [2024-05-15 09:07:57.404559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.673 [2024-05-15 09:07:57.418377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.673 [2024-05-15 09:07:57.418407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.673 [2024-05-15 09:07:57.418424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.673 [2024-05-15 09:07:57.430845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.673 [2024-05-15 09:07:57.430880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.673 [2024-05-15 09:07:57.430899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.673 [2024-05-15 09:07:57.445659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.673 [2024-05-15 09:07:57.445694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:42:02.673 [2024-05-15 09:07:57.445713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.673 [2024-05-15 09:07:57.462059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.673 [2024-05-15 09:07:57.462097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.673 [2024-05-15 09:07:57.462118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.933 [2024-05-15 09:07:57.478061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.933 [2024-05-15 09:07:57.478100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.933 [2024-05-15 09:07:57.478120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.933 [2024-05-15 09:07:57.490727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.933 [2024-05-15 09:07:57.490763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.933 [2024-05-15 09:07:57.490783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.933 [2024-05-15 09:07:57.505355] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.933 [2024-05-15 09:07:57.505385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.933 [2024-05-15 09:07:57.505402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.933 [2024-05-15 09:07:57.517454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.933 [2024-05-15 09:07:57.517485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.933 [2024-05-15 09:07:57.517502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.933 [2024-05-15 09:07:57.532944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.933 [2024-05-15 09:07:57.532979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:02.933 [2024-05-15 09:07:57.532999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:02.933 [2024-05-15 09:07:57.549953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420) 00:42:02.933 [2024-05-15 09:07:57.549987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2793 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:42:02.933 [2024-05-15 09:07:57.550006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:42:02.933 [2024-05-15 09:07:57.561944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420)
00:42:02.933 [2024-05-15 09:07:57.561981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:42:02.933 [2024-05-15 09:07:57.562002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair 0x1f64420, the retried READ, its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for sqid:1, len:1 reads with varying cid and lba from 09:07:57.577 through 09:07:58.871; further occurrences trimmed ...]
00:42:04.323 [2024-05-15 09:07:58.871005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f64420)
00:42:04.323 [2024-05-15 09:07:58.871039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:42:04.323 [2024-05-15 09:07:58.871058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:42:04.323
00:42:04.323 Latency(us)
00:42:04.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:04.323 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:42:04.323 nvme0n1 : 2.00 18295.90 71.47 0.00 0.00 6985.79 3689.43 19029.71
00:42:04.323 ===================================================================================================================
00:42:04.323 Total : 18295.90 71.47 0.00 0.00 6985.79 3689.43 19029.71
00:42:04.323 0
00:42:04.323 09:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:42:04.323 09:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:42:04.323 09:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:42:04.323 | .driver_specific
00:42:04.323 | .nvme_error
00:42:04.323 | .status_code
00:42:04.323 | .command_transient_transport_error'
00:42:04.323 09:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:42:04.582 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 ))
00:42:04.582 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2432968
00:42:04.582 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2432968 ']'
00:42:04.582 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2432968
00:42:04.582 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname
00:42:04.582 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:42:04.582 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2432968
00:42:04.583 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:42:04.583 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:42:04.583 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2432968'
00:42:04.583 killing process with pid 2432968
00:42:04.583 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2432968
00:42:04.842 Received shutdown signal, test time was about 2.000000 seconds
00:42:04.842
00:42:04.842 Latency(us)
00:42:04.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:04.842 ===================================================================================================================
00:42:04.842 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:42:04.842 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2432968
00:42:04.842 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:42:04.842 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:42:04.842 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:42:04.842 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:42:04.842 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:42:04.842 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2433374
00:42:04.842 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:42:04.842 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2433374 /var/tmp/bperf.sock
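The counter comes back as 143, the (( 143 > 0 )) assertion passes, and the harness tears down the first bdevperf instance (the all-zero table printed at shutdown is bdevperf's teardown summary) before relaunching it for the next case: 131072-byte random reads at queue depth 16. An annotated copy of that relaunch command; the flag meanings below are the standard bdevperf options, with only the values taken from the trace:

# -m 2: core mask 0x2, one reactor pinned to core 1
# -r /var/tmp/bperf.sock: UNIX socket the harness drives via rpc.py/bdevperf.py
# -w randread -o 131072 -q 16 -t 2: workload, I/O size in bytes, queue depth, runtime in seconds
# -z: start idle and wait for the perform_tests RPC instead of running immediately
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z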
00:42:04.842 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2433374 ']'
00:42:04.842 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock
00:42:04.842 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100
00:42:04.842 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:42:04.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:42:04.842 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable
00:42:04.842 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:42:04.842 [2024-05-15 09:07:59.442181] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:42:04.842 [2024-05-15 09:07:59.442280] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433374 ]
00:42:04.842 I/O size of 131072 is greater than zero copy threshold (65536).
00:42:04.842 Zero copy mechanism will not be used.
00:42:04.842 EAL: No free 2048 kB hugepages reported on node 1
00:42:04.842 [2024-05-15 09:07:59.513332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:04.842 [2024-05-15 09:07:59.602880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:42:05.100 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:42:05.100 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0
00:42:05.100 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:42:05.100 09:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:42:05.359 09:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:42:05.359 09:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:42:05.359 09:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:42:05.359 09:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:42:05.359 09:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:42:05.359 09:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:42:05.617 nvme0n1
00:42:05.617 09:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
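Three things are configured above before any I/O is issued: the NVMe bdev layer is told to keep per-status-code error counters and to retry failed I/Os indefinitely (--nvme-error-stat --bdev-retry-count -1), the accel framework's crc32c error injection is cleared and then re-armed to corrupt the next 32 crc32c operations, and the controller is attached with --ddgst so every NVMe/TCP data PDU carries a crc32c data digest; corrupting the digest computation is what makes the receive path report the data digest errors seen below. The same sequence as bare rpc.py calls, a sketch only: bperf_rpc resolves to rpc.py -s /var/tmp/bperf.sock as the trace shows, while the socket the rpc_cmd wrapper targets is harness configuration and is assumed to be the default here.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors per status code; retry instead of failing
$RPC accel_error_inject_error -o crc32c -t disable                                          # clear any stale crc32c injection (rpc_cmd in the trace)
$RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                                  # attach with data digest enabled; prints nvme0n1
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32                                    # re-arm: corrupt the next 32 crc32c operations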
00:42:05.617 09:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:42:05.617 09:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:42:05.617 09:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:42:05.617 09:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:42:05.617 09:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:42:05.875 I/O size of 131072 is greater than zero copy threshold (65536).
00:42:05.875 Zero copy mechanism will not be used.
00:42:05.875 Running I/O for 2 seconds...
00:42:05.875 [2024-05-15 09:08:00.472047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0)
00:42:05.875 [2024-05-15 09:08:00.472107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:42:05.875 [2024-05-15 09:08:00.472130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:42:05.875 [2024-05-15 09:08:00.480426] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0)
00:42:05.875 [2024-05-15 09:08:00.480459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:42:05.875 [2024-05-15 09:08:00.480478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
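Reading one of these completions: (00/22) is (status code type/status code), and code 0x22 in the generic type is exactly the Command Transient Transport Error the harness counts; dnr:0 means the Do Not Retry bit is clear, so the retry policy set earlier applies; and len:32 blocks at this namespace's 4096-byte block size matches the 131072-byte I/O size of the run. A toy decoder for the pair, written for this walkthrough rather than taken from the harness:

# Decode the "(SCT/SC)" pair printed in the completion lines above.
decode_nvme_status() {
    local sct=$((16#$1)) sc=$((16#$2))
    if ((sct == 0 && sc == 0x22)); then
        echo "generic status 0x22: COMMAND TRANSIENT TRANSPORT ERROR (retryable)"
    else
        echo "status code type ${sct}, status code ${sc}"
    fi
}
decode_nvme_status 00 22   # matches the (00/22) in the log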
[... the triplet repeats on tqpair 0x1bb91f0 for the 131072-byte reads (sqid:1, len:32, varying cid and lba, sqhd advancing through 0x0021, 0x0041, 0x0061, 0x0001) at roughly 7 ms intervals from 09:08:00.488 through 09:08:00.694; further occurrences trimmed ...]
00:42:06.136 [2024-05-15 09:08:00.701530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0)
00:42:06.136 [2024-05-15 09:08:00.701564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:42:06.136 [2024-05-15 09:08:00.701583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:42:06.136 [2024-05-15 09:08:00.708748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done:
*ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.136 [2024-05-15 09:08:00.708781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.136 [2024-05-15 09:08:00.708800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.136 [2024-05-15 09:08:00.715906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.136 [2024-05-15 09:08:00.715939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.136 [2024-05-15 09:08:00.715957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.136 [2024-05-15 09:08:00.723026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.136 [2024-05-15 09:08:00.723059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.136 [2024-05-15 09:08:00.723077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.136 [2024-05-15 09:08:00.730158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.136 [2024-05-15 09:08:00.730191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.136 [2024-05-15 09:08:00.730229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.136 [2024-05-15 09:08:00.737325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.136 [2024-05-15 09:08:00.737355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.136 [2024-05-15 09:08:00.737372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.136 [2024-05-15 09:08:00.744417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.136 [2024-05-15 09:08:00.744447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.136 [2024-05-15 09:08:00.744464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.136 [2024-05-15 09:08:00.751433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.136 [2024-05-15 09:08:00.751463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.136 [2024-05-15 09:08:00.751480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.136 [2024-05-15 09:08:00.758596] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.136 [2024-05-15 09:08:00.758629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.136 [2024-05-15 09:08:00.758648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.136 [2024-05-15 09:08:00.765849] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.136 [2024-05-15 09:08:00.765882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.136 [2024-05-15 09:08:00.765900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.136 [2024-05-15 09:08:00.773154] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.136 [2024-05-15 09:08:00.773186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.136 [2024-05-15 09:08:00.773207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.136 [2024-05-15 09:08:00.780385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.136 [2024-05-15 09:08:00.780416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.136 [2024-05-15 09:08:00.780433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.136 [2024-05-15 09:08:00.787445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.136 [2024-05-15 09:08:00.787475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.136 [2024-05-15 09:08:00.787491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.136 [2024-05-15 09:08:00.794449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.136 [2024-05-15 09:08:00.794484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.136 [2024-05-15 09:08:00.794501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.136 [2024-05-15 09:08:00.801433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.136 [2024-05-15 09:08:00.801462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.136 [2024-05-15 09:08:00.801479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:42:06.136 [2024-05-15 09:08:00.808611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.136 [2024-05-15 09:08:00.808644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.136 [2024-05-15 09:08:00.808662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.136 [2024-05-15 09:08:00.815714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.136 [2024-05-15 09:08:00.815746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.136 [2024-05-15 09:08:00.815765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.137 [2024-05-15 09:08:00.822887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.137 [2024-05-15 09:08:00.822920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.137 [2024-05-15 09:08:00.822938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.137 [2024-05-15 09:08:00.829985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.137 [2024-05-15 09:08:00.830017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.137 [2024-05-15 09:08:00.830035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.137 [2024-05-15 09:08:00.836973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.137 [2024-05-15 09:08:00.837004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.137 [2024-05-15 09:08:00.837023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.137 [2024-05-15 09:08:00.843950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.137 [2024-05-15 09:08:00.843981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.137 [2024-05-15 09:08:00.844000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.137 [2024-05-15 09:08:00.851097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.137 [2024-05-15 09:08:00.851129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.137 [2024-05-15 09:08:00.851147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.137 [2024-05-15 09:08:00.858245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.137 [2024-05-15 09:08:00.858292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.137 [2024-05-15 09:08:00.858309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.137 [2024-05-15 09:08:00.865349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.137 [2024-05-15 09:08:00.865377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.137 [2024-05-15 09:08:00.865394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.137 [2024-05-15 09:08:00.872548] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.137 [2024-05-15 09:08:00.872580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.137 [2024-05-15 09:08:00.872598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.137 [2024-05-15 09:08:00.879602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.137 [2024-05-15 09:08:00.879635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.137 [2024-05-15 09:08:00.879654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.137 [2024-05-15 09:08:00.886835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.137 [2024-05-15 09:08:00.886867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.137 [2024-05-15 09:08:00.886886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.137 [2024-05-15 09:08:00.893924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.137 [2024-05-15 09:08:00.893956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.137 [2024-05-15 09:08:00.893975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.137 [2024-05-15 09:08:00.900948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.137 [2024-05-15 09:08:00.900981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.137 [2024-05-15 09:08:00.900998] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.137 [2024-05-15 09:08:00.907987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.137 [2024-05-15 09:08:00.908020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.137 [2024-05-15 09:08:00.908039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.137 [2024-05-15 09:08:00.915159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.137 [2024-05-15 09:08:00.915199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.137 [2024-05-15 09:08:00.915227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.137 [2024-05-15 09:08:00.922465] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.137 [2024-05-15 09:08:00.922497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.137 [2024-05-15 09:08:00.922514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.396 [2024-05-15 09:08:00.929569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.396 [2024-05-15 09:08:00.929605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.396 [2024-05-15 09:08:00.929625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.396 [2024-05-15 09:08:00.936616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.396 [2024-05-15 09:08:00.936652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.396 [2024-05-15 09:08:00.936671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.396 [2024-05-15 09:08:00.943719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.396 [2024-05-15 09:08:00.943753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.396 [2024-05-15 09:08:00.943772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.396 [2024-05-15 09:08:00.950923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.396 [2024-05-15 09:08:00.950956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.396 [2024-05-15 09:08:00.950975] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.396 [2024-05-15 09:08:00.958173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.396 [2024-05-15 09:08:00.958223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.396 [2024-05-15 09:08:00.958244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.396 [2024-05-15 09:08:00.965371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.396 [2024-05-15 09:08:00.965400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.396 [2024-05-15 09:08:00.965417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.396 [2024-05-15 09:08:00.972572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.396 [2024-05-15 09:08:00.972609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.396 [2024-05-15 09:08:00.972628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.396 [2024-05-15 09:08:00.979639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.396 [2024-05-15 09:08:00.979681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.396 [2024-05-15 09:08:00.979700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.396 [2024-05-15 09:08:00.986900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.396 [2024-05-15 09:08:00.986935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.396 [2024-05-15 09:08:00.986954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.396 [2024-05-15 09:08:00.993844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.396 [2024-05-15 09:08:00.993877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.396 [2024-05-15 09:08:00.993896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.396 [2024-05-15 09:08:01.000951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.396 [2024-05-15 09:08:01.000984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:42:06.396 [2024-05-15 09:08:01.001008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.396 [2024-05-15 09:08:01.008315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.396 [2024-05-15 09:08:01.008346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.008363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.015445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.015476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.015492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.022572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.022605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.022634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.029987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.030021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.030043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.037100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.037133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.037160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.044282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.044313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.044333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.051406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.051436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.051458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.058493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.058549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.058574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.065572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.065604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.065623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.072764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.072796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.072815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.080009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.080043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.080062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.087269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.087300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.087318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.094693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.094727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.094747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.101788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.101828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.101853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.109040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.109075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.109104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.116181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.116223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.116259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.123407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.123438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.123460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.130479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.130509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.130525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.137557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.137590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.137610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.144671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.144704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.144729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.151786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 
00:42:06.397 [2024-05-15 09:08:01.151819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.151838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.158821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.158853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.158872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.165931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.165964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.165983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.173096] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.173130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.173148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.180468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.180524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.397 [2024-05-15 09:08:01.180544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.397 [2024-05-15 09:08:01.187667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.397 [2024-05-15 09:08:01.187716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.398 [2024-05-15 09:08:01.187758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.656 [2024-05-15 09:08:01.194976] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.656 [2024-05-15 09:08:01.195013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.656 [2024-05-15 09:08:01.195033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.656 [2024-05-15 09:08:01.202052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.656 [2024-05-15 09:08:01.202085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.656 [2024-05-15 09:08:01.202110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.656 [2024-05-15 09:08:01.209152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.656 [2024-05-15 09:08:01.209184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.656 [2024-05-15 09:08:01.209204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.657 [2024-05-15 09:08:01.216329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.657 [2024-05-15 09:08:01.216359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.657 [2024-05-15 09:08:01.216379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.657 [2024-05-15 09:08:01.223446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.657 [2024-05-15 09:08:01.223475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.657 [2024-05-15 09:08:01.223504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.657 [2024-05-15 09:08:01.230581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.657 [2024-05-15 09:08:01.230615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.657 [2024-05-15 09:08:01.230633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.657 [2024-05-15 09:08:01.237692] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.657 [2024-05-15 09:08:01.237725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.657 [2024-05-15 09:08:01.237744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.657 [2024-05-15 09:08:01.244864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.657 [2024-05-15 09:08:01.244897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.657 [2024-05-15 09:08:01.244915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.657 [2024-05-15 09:08:01.252044] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.657 [2024-05-15 09:08:01.252078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.657 [2024-05-15 09:08:01.252106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.657 [2024-05-15 09:08:01.259126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.657 [2024-05-15 09:08:01.259161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.657 [2024-05-15 09:08:01.259180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.657 [2024-05-15 09:08:01.266259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.657 [2024-05-15 09:08:01.266290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.657 [2024-05-15 09:08:01.266310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.657 [2024-05-15 09:08:01.273384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.657 [2024-05-15 09:08:01.273415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.657 [2024-05-15 09:08:01.273432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.657 [2024-05-15 09:08:01.280492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.657 [2024-05-15 09:08:01.280532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.657 [2024-05-15 09:08:01.280576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.657 [2024-05-15 09:08:01.287458] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.657 [2024-05-15 09:08:01.287488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.657 [2024-05-15 09:08:01.287522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.657 [2024-05-15 09:08:01.294430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.657 [2024-05-15 09:08:01.294460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.657 [2024-05-15 09:08:01.294478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:42:06.657 [2024-05-15 09:08:01.301427] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.657 [2024-05-15 09:08:01.301456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.657 [2024-05-15 09:08:01.301475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.657 [2024-05-15 09:08:01.308488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.657 [2024-05-15 09:08:01.308543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.657 [2024-05-15 09:08:01.308566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.657 [2024-05-15 09:08:01.315657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.657 [2024-05-15 09:08:01.315690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.657 [2024-05-15 09:08:01.315708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.657 [2024-05-15 09:08:01.322865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.657 [2024-05-15 09:08:01.322898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.657 [2024-05-15 09:08:01.322918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.657 [2024-05-15 09:08:01.329973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.657 [2024-05-15 09:08:01.330005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.657 [2024-05-15 09:08:01.330023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.657 [2024-05-15 09:08:01.337029] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.657 [2024-05-15 09:08:01.337061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.657 [2024-05-15 09:08:01.337080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.657 [2024-05-15 09:08:01.344073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.657 [2024-05-15 09:08:01.344105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.657 [2024-05-15 09:08:01.344129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.657 [2024-05-15 09:08:01.351238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.657 [2024-05-15 09:08:01.351285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.657 [2024-05-15 09:08:01.351302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.657 [2024-05-15 09:08:01.358369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.657 [2024-05-15 09:08:01.358398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.658 [2024-05-15 09:08:01.358419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.658 [2024-05-15 09:08:01.365463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.658 [2024-05-15 09:08:01.365493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.658 [2024-05-15 09:08:01.365512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.658 [2024-05-15 09:08:01.372532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.658 [2024-05-15 09:08:01.372565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.658 [2024-05-15 09:08:01.372583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.658 [2024-05-15 09:08:01.379658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.658 [2024-05-15 09:08:01.379691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.658 [2024-05-15 09:08:01.379712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.658 [2024-05-15 09:08:01.386823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.658 [2024-05-15 09:08:01.386856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.658 [2024-05-15 09:08:01.386874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.658 [2024-05-15 09:08:01.393936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.658 [2024-05-15 09:08:01.393968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.658 [2024-05-15 09:08:01.393989] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.658 [2024-05-15 09:08:01.401016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.658 [2024-05-15 09:08:01.401048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.658 [2024-05-15 09:08:01.401069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.658 [2024-05-15 09:08:01.408213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.658 [2024-05-15 09:08:01.408274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.658 [2024-05-15 09:08:01.408293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.658 [2024-05-15 09:08:01.415463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.658 [2024-05-15 09:08:01.415492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.658 [2024-05-15 09:08:01.415511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.658 [2024-05-15 09:08:01.422638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.658 [2024-05-15 09:08:01.422670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.658 [2024-05-15 09:08:01.422690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.658 [2024-05-15 09:08:01.429760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.658 [2024-05-15 09:08:01.429793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.658 [2024-05-15 09:08:01.429818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.658 [2024-05-15 09:08:01.436868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.658 [2024-05-15 09:08:01.436901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.658 [2024-05-15 09:08:01.436920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.658 [2024-05-15 09:08:01.441450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.658 [2024-05-15 09:08:01.441479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.658 [2024-05-15 09:08:01.441496] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.658 [2024-05-15 09:08:01.447174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.658 [2024-05-15 09:08:01.447211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.658 [2024-05-15 09:08:01.447265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.917 [2024-05-15 09:08:01.454172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.917 [2024-05-15 09:08:01.454208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.917 [2024-05-15 09:08:01.454261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.917 [2024-05-15 09:08:01.461102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.917 [2024-05-15 09:08:01.461136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.917 [2024-05-15 09:08:01.461156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.917 [2024-05-15 09:08:01.468368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.917 [2024-05-15 09:08:01.468398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.917 [2024-05-15 09:08:01.468414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.917 [2024-05-15 09:08:01.475415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.917 [2024-05-15 09:08:01.475445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.917 [2024-05-15 09:08:01.475461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.917 [2024-05-15 09:08:01.482591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.917 [2024-05-15 09:08:01.482624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.917 [2024-05-15 09:08:01.482643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.917 [2024-05-15 09:08:01.489714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.917 [2024-05-15 09:08:01.489746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:42:06.917 [2024-05-15 09:08:01.489769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.917 [2024-05-15 09:08:01.496866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.917 [2024-05-15 09:08:01.496899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.917 [2024-05-15 09:08:01.496917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.917 [2024-05-15 09:08:01.504102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.917 [2024-05-15 09:08:01.504135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.917 [2024-05-15 09:08:01.504154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.917 [2024-05-15 09:08:01.511236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.917 [2024-05-15 09:08:01.511281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.917 [2024-05-15 09:08:01.511301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.917 [2024-05-15 09:08:01.518261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.917 [2024-05-15 09:08:01.518306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.917 [2024-05-15 09:08:01.518331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.917 [2024-05-15 09:08:01.525291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.917 [2024-05-15 09:08:01.525322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.917 [2024-05-15 09:08:01.525345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.917 [2024-05-15 09:08:01.532975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.917 [2024-05-15 09:08:01.533010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.533029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.539699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.539728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16928 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.539768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.546515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.546560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.546582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.553689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.553721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.553740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.560885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.560917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.560936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.567958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.567990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.568009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.575428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.575468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.575501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.582635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.582669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.582688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.589322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.589357] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.589381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.596505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.596553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.596572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.603599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.603631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.603650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.610738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.610770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.610789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.617855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.617888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.617906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.624938] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.624970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.624990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.631964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.631997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.632016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.639227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.639272] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.639292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.646362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.646390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.646411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.653488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.653536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.653555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.660634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.660667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.660688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.668128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.668162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.668181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.675226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.675270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.675292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.682464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.682511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.682530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.689641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.689675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.689694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.696899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.696933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.696952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:06.918 [2024-05-15 09:08:01.704093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:06.918 [2024-05-15 09:08:01.704126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:06.918 [2024-05-15 09:08:01.704144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:07.177 [2024-05-15 09:08:01.711358] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.177 [2024-05-15 09:08:01.711406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.177 [2024-05-15 09:08:01.711431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:07.177 [2024-05-15 09:08:01.718478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.177 [2024-05-15 09:08:01.718527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.177 [2024-05-15 09:08:01.718547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:07.177 [2024-05-15 09:08:01.726233] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.177 [2024-05-15 09:08:01.726280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.177 [2024-05-15 09:08:01.726299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:07.177 [2024-05-15 09:08:01.733269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.733298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.733325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:07.178 [2024-05-15 09:08:01.739857] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.739891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.739910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:07.178 [2024-05-15 09:08:01.746871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.746904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.746924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:07.178 [2024-05-15 09:08:01.754030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.754063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.754082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:07.178 [2024-05-15 09:08:01.761320] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.761350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.761367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:07.178 [2024-05-15 09:08:01.768493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.768540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.768559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:07.178 [2024-05-15 09:08:01.775574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.775608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.775627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:07.178 [2024-05-15 09:08:01.782968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.783003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.783022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:42:07.178 [2024-05-15 09:08:01.790171] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.790205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.790233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:07.178 [2024-05-15 09:08:01.797331] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.797362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.797379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:07.178 [2024-05-15 09:08:01.804454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.804486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.804520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:07.178 [2024-05-15 09:08:01.811634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.811668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.811687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:07.178 [2024-05-15 09:08:01.818781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.818815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.818834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:07.178 [2024-05-15 09:08:01.825903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.825936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.825955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:07.178 [2024-05-15 09:08:01.832908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.832942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.832967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:07.178 [2024-05-15 09:08:01.840179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.840222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.840244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:07.178 [2024-05-15 09:08:01.847315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.847345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.847362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:07.178 [2024-05-15 09:08:01.854839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.854873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.854892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:07.178 [2024-05-15 09:08:01.862131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.862166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.862185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:07.178 [2024-05-15 09:08:01.869228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.869274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.869291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:07.178 [2024-05-15 09:08:01.876417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.876446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.876480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:07.178 [2024-05-15 09:08:01.883480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.883528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.883547] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:07.178 [2024-05-15 09:08:01.890570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.178 [2024-05-15 09:08:01.890603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.178 [2024-05-15 09:08:01.890622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:07.179 [2024-05-15 09:08:01.897673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.179 [2024-05-15 09:08:01.897712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.179 [2024-05-15 09:08:01.897732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:07.179 [2024-05-15 09:08:01.904945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.179 [2024-05-15 09:08:01.904980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.179 [2024-05-15 09:08:01.904999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:07.179 [2024-05-15 09:08:01.912050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.179 [2024-05-15 09:08:01.912083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.179 [2024-05-15 09:08:01.912102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:07.179 [2024-05-15 09:08:01.919075] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.179 [2024-05-15 09:08:01.919109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.179 [2024-05-15 09:08:01.919128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:07.179 [2024-05-15 09:08:01.926213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.179 [2024-05-15 09:08:01.926250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.179 [2024-05-15 09:08:01.926283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:07.179 [2024-05-15 09:08:01.933372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.179 [2024-05-15 09:08:01.933417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.179 [2024-05-15 
09:08:01.933434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:07.179 [2024-05-15 09:08:01.940451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.179 [2024-05-15 09:08:01.940481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.179 [2024-05-15 09:08:01.940498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:07.179 [2024-05-15 09:08:01.947464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.179 [2024-05-15 09:08:01.947510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.179 [2024-05-15 09:08:01.947530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:07.179 [2024-05-15 09:08:01.954673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.179 [2024-05-15 09:08:01.954706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.179 [2024-05-15 09:08:01.954725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:07.179 [2024-05-15 09:08:01.961696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.179 [2024-05-15 09:08:01.961729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.179 [2024-05-15 09:08:01.961748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:07.179 [2024-05-15 09:08:01.968872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.179 [2024-05-15 09:08:01.968919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.179 [2024-05-15 09:08:01.968953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:07.438 [2024-05-15 09:08:01.976007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.438 [2024-05-15 09:08:01.976044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.438 [2024-05-15 09:08:01.976064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:07.438 [2024-05-15 09:08:01.983078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.438 [2024-05-15 09:08:01.983113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:42:07.438 [2024-05-15 09:08:01.983132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:07.438 [2024-05-15 09:08:01.990093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.438 [2024-05-15 09:08:01.990127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.438 [2024-05-15 09:08:01.990146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:07.438 [2024-05-15 09:08:01.997130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.438 [2024-05-15 09:08:01.997164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.438 [2024-05-15 09:08:01.997183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:07.438 [2024-05-15 09:08:02.004261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.438 [2024-05-15 09:08:02.004291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.438 [2024-05-15 09:08:02.004308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:07.438 [2024-05-15 09:08:02.011433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.438 [2024-05-15 09:08:02.011464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.438 [2024-05-15 09:08:02.011496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:07.438 [2024-05-15 09:08:02.018626] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.438 [2024-05-15 09:08:02.018667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.438 [2024-05-15 09:08:02.018692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:07.438 [2024-05-15 09:08:02.025699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.438 [2024-05-15 09:08:02.025733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.438 [2024-05-15 09:08:02.025752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:07.438 [2024-05-15 09:08:02.032745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.439 [2024-05-15 09:08:02.032780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.439 [2024-05-15 09:08:02.032800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:07.439 [2024-05-15 09:08:02.039828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.439 [2024-05-15 09:08:02.039862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.439 [2024-05-15 09:08:02.039881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:07.439 [2024-05-15 09:08:02.047054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.439 [2024-05-15 09:08:02.047089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.439 [2024-05-15 09:08:02.047109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:07.439 [2024-05-15 09:08:02.054119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.439 [2024-05-15 09:08:02.054153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.439 [2024-05-15 09:08:02.054172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:07.439 [2024-05-15 09:08:02.061126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.439 [2024-05-15 09:08:02.061160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.439 [2024-05-15 09:08:02.061179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:07.439 [2024-05-15 09:08:02.068266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.439 [2024-05-15 09:08:02.068297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.439 [2024-05-15 09:08:02.068314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:07.439 [2024-05-15 09:08:02.075240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.439 [2024-05-15 09:08:02.075284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.439 [2024-05-15 09:08:02.075301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:07.439 [2024-05-15 09:08:02.082372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.439 [2024-05-15 09:08:02.082422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.439 [2024-05-15 09:08:02.082440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:07.439 [2024-05-15 09:08:02.089380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.439 [2024-05-15 09:08:02.089426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.439 [2024-05-15 09:08:02.089443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:07.439 [2024-05-15 09:08:02.096630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.439 [2024-05-15 09:08:02.096664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.439 [2024-05-15 09:08:02.096682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:07.439 [2024-05-15 09:08:02.103981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.439 [2024-05-15 09:08:02.104016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.439 [2024-05-15 09:08:02.104035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:07.439 [2024-05-15 09:08:02.110956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.439 [2024-05-15 09:08:02.110989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.439 [2024-05-15 09:08:02.111008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:07.439 [2024-05-15 09:08:02.117956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.439 [2024-05-15 09:08:02.117986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.439 [2024-05-15 09:08:02.118018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:07.439 [2024-05-15 09:08:02.124891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.439 [2024-05-15 09:08:02.124924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.439 [2024-05-15 09:08:02.124942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:07.439 [2024-05-15 09:08:02.132136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 
00:42:07.439 [2024-05-15 09:08:02.132167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.439 [2024-05-15 09:08:02.132184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:07.439 [2024-05-15 09:08:02.139357] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.439 [2024-05-15 09:08:02.139403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.439 [2024-05-15 09:08:02.139419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:07.439 [2024-05-15 09:08:02.146567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.439 [2024-05-15 09:08:02.146601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.440 [2024-05-15 09:08:02.146620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:07.440 [2024-05-15 09:08:02.153788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.440 [2024-05-15 09:08:02.153822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.440 [2024-05-15 09:08:02.153841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:07.440 [2024-05-15 09:08:02.161116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.440 [2024-05-15 09:08:02.161150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.440 [2024-05-15 09:08:02.161169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:07.440 [2024-05-15 09:08:02.168294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.440 [2024-05-15 09:08:02.168341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.440 [2024-05-15 09:08:02.168357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:07.440 [2024-05-15 09:08:02.175416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.440 [2024-05-15 09:08:02.175446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.440 [2024-05-15 09:08:02.175463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:07.440 [2024-05-15 09:08:02.182395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.440 [2024-05-15 09:08:02.182440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.440 [2024-05-15 09:08:02.182456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:07.440 [2024-05-15 09:08:02.189438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.440 [2024-05-15 09:08:02.189468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.440 [2024-05-15 09:08:02.189485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:07.440 [2024-05-15 09:08:02.196384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.440 [2024-05-15 09:08:02.196414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.440 [2024-05-15 09:08:02.196430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:07.440 [2024-05-15 09:08:02.203407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.440 [2024-05-15 09:08:02.203436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.440 [2024-05-15 09:08:02.203459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:07.440 [2024-05-15 09:08:02.210431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.440 [2024-05-15 09:08:02.210461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.440 [2024-05-15 09:08:02.210478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:07.440 [2024-05-15 09:08:02.217489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.440 [2024-05-15 09:08:02.217518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.440 [2024-05-15 09:08:02.217535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:07.440 [2024-05-15 09:08:02.224448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0) 00:42:07.440 [2024-05-15 09:08:02.224497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:07.440 [2024-05-15 09:08:02.224514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:07.700 [2024-05-15 09:08:02.231644] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb91f0)
00:42:07.700 [2024-05-15 09:08:02.231681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:42:07.700 [2024-05-15 09:08:02.231702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... ~33 similar data digest errors on tqpair=(0x1bb91f0) elided, 09:08:02.238714 through 09:08:02.462607: each READ on qid:1 (cid and lba varying) completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), one roughly every 7 ms ...]
00:42:07.701
00:42:07.701 Latency(us)
00:42:07.701 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:42:07.701 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:42:07.701 nvme0n1                     :       2.00    4363.56     545.45       0.00       0.00    3661.84     916.29    8786.68
00:42:07.701 ===================================================================================================================
00:42:07.701 Total                       :            4363.56     545.45       0.00       0.00    3661.84     916.29    8786.68
00:42:07.701 0
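A quick consistency check on the nvme0n1 row above: at an I/O size of 131072 bytes (0.125 MiB), 4363.56 IOPS x 0.125 MiB = 545.45 MiB/s, which matches the reported throughput column.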
00:42:07.701 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:42:07.701 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:42:07.701 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:42:07.701 | .driver_specific
00:42:07.701 | .nvme_error
00:42:07.701 | .status_code
00:42:07.701 | .command_transient_transport_error'
00:42:07.701 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:42:07.960 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 281 > 0 ))
00:42:07.960 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2433374
00:42:07.960 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2433374 ']'
00:42:07.960 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2433374
00:42:07.960 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname
00:42:07.960 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:42:07.960 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2433374
00:42:08.218 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:42:08.218 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:42:08.218 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2433374'
00:42:08.218 killing process with pid 2433374
00:42:08.218 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2433374
00:42:08.218 Received shutdown signal, test time was about 2.000000 seconds
00:42:08.218
00:42:08.218 Latency(us)
00:42:08.218 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:42:08.218 ===================================================================================================================
00:42:08.218 Total                       :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:42:08.218 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2433374
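For reference, the get_transient_errcount step traced above reduces to a single RPC call plus a jq filter; a minimal standalone sketch, using the rpc.py path, socket, and bdev name from this run, would be:

    #!/usr/bin/env bash
    # Read the per-status NVMe error counter kept by bdevperf. This relies on
    # bdevperf having been configured with bdev_nvme_set_options
    # --nvme-error-stat, as this test does before each run.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    bdev=nvme0n1

    "$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
      | jq -r '.bdevs[0]
          | .driver_specific
          | .nvme_error
          | .status_code
          | .command_transient_transport_error'

In this run the counter came back as 281, hence the (( 281 > 0 )) assertion in the trace above.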
00:42:08.218 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:42:08.218 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:42:08.218 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:42:08.218 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:42:08.218 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:42:08.218 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2433784
00:42:08.218 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:42:08.218 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2433784 /var/tmp/bperf.sock
00:42:08.218 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2433784 ']'
00:42:08.218 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock
00:42:08.218 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100
00:42:08.218 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:42:08.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:42:08.218 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable
00:42:08.218 09:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:42:08.218 [2024-05-15 09:08:03.002004] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:42:08.218 [2024-05-15 09:08:03.002096] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433784 ]
00:42:08.476 EAL: No free 2048 kB hugepages reported on node 1
00:42:08.476 [2024-05-15 09:08:03.072184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:08.476 [2024-05-15 09:08:03.154221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:42:08.476 09:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:42:08.476 09:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0
00:42:08.476 09:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:42:08.476 09:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:42:08.735 09:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:42:08.735 09:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:42:08.735 09:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:42:08.735 09:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:42:08.735 09:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:42:08.735 09:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:42:09.301 nvme0n1
00:42:09.301 09:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:42:09.301 09:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:42:09.301 09:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:42:09.301 09:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
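Condensed out of the trace above, the per-test setup amounts to four RPCs against the freshly started bdevperf instance; a sketch with the exact socket, address, NQN, and flags from this run (the -i 256 argument is reproduced verbatim from the trace):

    #!/usr/bin/env bash
    # Digest-error test setup as traced above: keep per-status NVMe error
    # statistics, retry failed I/O indefinitely, attach the target with TCP
    # data digest (DDGST) enabled, then turn on crc32c corruption so that
    # data digest checks start failing. The -i 256 value is copied from the
    # trace; its exact injection cadence is left to the accel_error module.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t disable   # clean slate
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 256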
00:42:09.301 09:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:42:09.301 09:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:42:09.561 Running I/O for 2 seconds...
00:42:09.561 [2024-05-15 09:08:04.105933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190ee5c8
00:42:09.561 [2024-05-15 09:08:04.106976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:42:09.561 [2024-05-15 09:08:04.107023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
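Every completion in this phase carries the same status: (00/22) is status code type 0x0 (generic) with status code 0x22, which SPDK prints as COMMAND TRANSIENT TRANSPORT ERROR and which feeds the command_transient_transport_error counter read back by the jq filter shown earlier. The injected crc32c corruption trips the data digest verification in data_crc32_calc_done (tcp.c:2058) for these WRITEs, and with --bdev-retry-count -1 the bdev layer keeps retrying, so the job still runs for its full two seconds.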
[... ~90 similar Data digest error entries elided, 09:08:04.119331 through 09:08:05.331695: WRITE commands on qid:1 (cid and lba varying), pdu values varying (0x2000190fbcf0, 0x2000190e3060, 0x2000190e12d8, ...) until 09:08:04.219764, then pdu=0x2000190e3d08 with sqhd:007a from 09:08:04.233629 onward, one roughly every 12-14 ms ...]
00:42:10.601 [2024-05-15 09:08:05.345532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08
00:42:10.601 [2024-05-15 09:08:05.345776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.601 [2024-05-15 09:08:05.345802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.601 [2024-05-15 09:08:05.359365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:10.601 [2024-05-15 09:08:05.359579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:69 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.601 [2024-05-15 09:08:05.359607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.601 [2024-05-15 09:08:05.373263] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:10.601 [2024-05-15 09:08:05.373455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.601 [2024-05-15 09:08:05.373482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.601 [2024-05-15 09:08:05.387091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:10.601 [2024-05-15 09:08:05.387328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.601 [2024-05-15 09:08:05.387355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.861 [2024-05-15 09:08:05.400619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:10.861 [2024-05-15 09:08:05.400910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.861 [2024-05-15 09:08:05.400956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.861 [2024-05-15 09:08:05.414373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:10.861 [2024-05-15 09:08:05.414581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.861 [2024-05-15 09:08:05.414625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.861 [2024-05-15 09:08:05.428339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:10.861 [2024-05-15 09:08:05.428566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.861 [2024-05-15 09:08:05.428595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.861 [2024-05-15 09:08:05.442088] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with 
pdu=0x2000190e3d08 00:42:10.861 [2024-05-15 09:08:05.442321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.861 [2024-05-15 09:08:05.442349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.861 [2024-05-15 09:08:05.455936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:10.861 [2024-05-15 09:08:05.456166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.861 [2024-05-15 09:08:05.456194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.861 [2024-05-15 09:08:05.469781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:10.861 [2024-05-15 09:08:05.470055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.861 [2024-05-15 09:08:05.470098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.861 [2024-05-15 09:08:05.483658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:10.861 [2024-05-15 09:08:05.483878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.861 [2024-05-15 09:08:05.483908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.861 [2024-05-15 09:08:05.497313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:10.861 [2024-05-15 09:08:05.497587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.861 [2024-05-15 09:08:05.497615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.861 [2024-05-15 09:08:05.511257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:10.861 [2024-05-15 09:08:05.511485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.861 [2024-05-15 09:08:05.511514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.861 [2024-05-15 09:08:05.525248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:10.861 [2024-05-15 09:08:05.525510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.861 [2024-05-15 09:08:05.525560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.861 [2024-05-15 09:08:05.539071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:10.861 [2024-05-15 09:08:05.539299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.861 [2024-05-15 09:08:05.539326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.861 [2024-05-15 09:08:05.552987] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:10.861 [2024-05-15 09:08:05.553246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.861 [2024-05-15 09:08:05.553274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.861 [2024-05-15 09:08:05.566712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:10.861 [2024-05-15 09:08:05.566920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.861 [2024-05-15 09:08:05.566948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.861 [2024-05-15 09:08:05.580625] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:10.861 [2024-05-15 09:08:05.580901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.861 [2024-05-15 09:08:05.580929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.861 [2024-05-15 09:08:05.594469] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:10.861 [2024-05-15 09:08:05.594678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.861 [2024-05-15 09:08:05.594706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.861 [2024-05-15 09:08:05.608447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:10.861 [2024-05-15 09:08:05.608680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.861 [2024-05-15 09:08:05.608707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.862 [2024-05-15 09:08:05.622359] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:10.862 [2024-05-15 09:08:05.622553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.862 [2024-05-15 09:08:05.622580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.862 [2024-05-15 09:08:05.636206] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:10.862 [2024-05-15 09:08:05.636453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.862 [2024-05-15 09:08:05.636480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:10.862 [2024-05-15 09:08:05.649955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:10.862 [2024-05-15 09:08:05.650190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:10.862 [2024-05-15 09:08:05.650227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.121 [2024-05-15 09:08:05.663240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.121 [2024-05-15 09:08:05.663456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.121 [2024-05-15 09:08:05.663486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.121 [2024-05-15 09:08:05.677104] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.121 [2024-05-15 09:08:05.677313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.121 [2024-05-15 09:08:05.677342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.121 [2024-05-15 09:08:05.690872] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.121 [2024-05-15 09:08:05.691099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.121 [2024-05-15 09:08:05.691127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.121 [2024-05-15 09:08:05.704798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.121 [2024-05-15 09:08:05.705013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.121 [2024-05-15 09:08:05.705040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.121 [2024-05-15 09:08:05.718481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.121 [2024-05-15 09:08:05.718701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.121 [2024-05-15 09:08:05.718728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.121 [2024-05-15 09:08:05.732351] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.121 [2024-05-15 09:08:05.732616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.121 [2024-05-15 09:08:05.732644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.121 [2024-05-15 09:08:05.746350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.121 [2024-05-15 09:08:05.746626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.121 [2024-05-15 09:08:05.746655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.121 [2024-05-15 09:08:05.760062] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.121 [2024-05-15 09:08:05.760302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.121 [2024-05-15 09:08:05.760330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.121 [2024-05-15 09:08:05.773866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.121 [2024-05-15 09:08:05.774131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.121 [2024-05-15 09:08:05.774159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.121 [2024-05-15 09:08:05.787666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.121 [2024-05-15 09:08:05.787886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.121 [2024-05-15 09:08:05.787914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.121 [2024-05-15 09:08:05.801558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.121 [2024-05-15 09:08:05.801782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.121 [2024-05-15 09:08:05.801809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.121 [2024-05-15 09:08:05.815356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.121 [2024-05-15 09:08:05.815545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.121 [2024-05-15 09:08:05.815572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.121 [2024-05-15 09:08:05.829321] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.121 [2024-05-15 09:08:05.829579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.121 [2024-05-15 09:08:05.829608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.121 [2024-05-15 09:08:05.843136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.121 [2024-05-15 09:08:05.843360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.121 [2024-05-15 09:08:05.843402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.121 [2024-05-15 09:08:05.856948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.121 [2024-05-15 09:08:05.857166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.121 [2024-05-15 09:08:05.857194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.121 [2024-05-15 09:08:05.870690] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.121 [2024-05-15 09:08:05.870907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.121 [2024-05-15 09:08:05.870935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.121 [2024-05-15 09:08:05.884507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.121 [2024-05-15 09:08:05.884764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.121 [2024-05-15 09:08:05.884799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.121 [2024-05-15 09:08:05.898459] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.121 [2024-05-15 09:08:05.898733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.121 [2024-05-15 09:08:05.898761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.122 [2024-05-15 09:08:05.912348] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.122 [2024-05-15 09:08:05.912618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.122 [2024-05-15 09:08:05.912658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.381 [2024-05-15 
09:08:05.925970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.381 [2024-05-15 09:08:05.926182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.381 [2024-05-15 09:08:05.926212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.381 [2024-05-15 09:08:05.939861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.381 [2024-05-15 09:08:05.940129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.381 [2024-05-15 09:08:05.940157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.381 [2024-05-15 09:08:05.953729] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.381 [2024-05-15 09:08:05.953933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.381 [2024-05-15 09:08:05.953960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.381 [2024-05-15 09:08:05.967737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.381 [2024-05-15 09:08:05.967933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.381 [2024-05-15 09:08:05.967961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.381 [2024-05-15 09:08:05.981472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.381 [2024-05-15 09:08:05.981711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.381 [2024-05-15 09:08:05.981739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.381 [2024-05-15 09:08:05.995222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.381 [2024-05-15 09:08:05.995470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.381 [2024-05-15 09:08:05.995498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.381 [2024-05-15 09:08:06.009112] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.381 [2024-05-15 09:08:06.009356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.381 [2024-05-15 09:08:06.009392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
00:42:11.381 [2024-05-15 09:08:06.022813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.381 [2024-05-15 09:08:06.023004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.381 [2024-05-15 09:08:06.023031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.381 [2024-05-15 09:08:06.036546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.381 [2024-05-15 09:08:06.036811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.381 [2024-05-15 09:08:06.036839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.381 [2024-05-15 09:08:06.050298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.381 [2024-05-15 09:08:06.050586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.381 [2024-05-15 09:08:06.050614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.381 [2024-05-15 09:08:06.064095] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.381 [2024-05-15 09:08:06.064306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.381 [2024-05-15 09:08:06.064333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.381 [2024-05-15 09:08:06.077870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.381 [2024-05-15 09:08:06.078113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.381 [2024-05-15 09:08:06.078141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.381 [2024-05-15 09:08:06.091724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d8c70) with pdu=0x2000190e3d08 00:42:11.381 [2024-05-15 09:08:06.091937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:11.381 [2024-05-15 09:08:06.091964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:11.381 00:42:11.381 Latency(us) 00:42:11.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:11.381 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:11.381 nvme0n1 : 2.01 18399.59 71.87 0.00 0.00 6939.66 3179.71 15146.10 00:42:11.381 =================================================================================================================== 00:42:11.381 Total : 18399.59 71.87 0.00 0.00 6939.66 3179.71 15146.10 
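A quick consistency check on the summary table above: with the 4096-byte I/O size from the job line, the MiB/s column follows directly from the IOPS column. The one-liner below is a sketch of that arithmetic, not part of digest.sh or the test output.

  # Reproduce the MiB/s column from the IOPS column:
  # 18399.59 I/Os per second x 4096 bytes per I/O, expressed in MiB/s.
  awk 'BEGIN { printf "%.2f MiB/s\n", 18399.59 * 4096 / (1024 * 1024) }'
  # prints: 71.87 MiB/s, matching the bdevperf table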
00:42:11.381 0
00:42:11.381 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:42:11.381 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:42:11.381 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:42:11.381 | .driver_specific
00:42:11.381 | .nvme_error
00:42:11.381 | .status_code
00:42:11.381 | .command_transient_transport_error'
00:42:11.381 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:42:11.642 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 ))
00:42:11.642 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2433784
00:42:11.642 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2433784 ']'
00:42:11.642 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2433784
00:42:11.642 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname
00:42:11.642 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:42:11.642 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2433784
00:42:11.642 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:42:11.642 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:42:11.642 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2433784'
killing process with pid 2433784
00:42:11.642 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2433784
00:42:11.642 Received shutdown signal, test time was about 2.000000 seconds
00:42:11.642
00:42:11.642 Latency(us)
00:42:11.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:11.642 ===================================================================================================================
00:42:11.642 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:42:11.642 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2433784
00:42:11.900 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:42:11.900 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:42:11.900 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:42:11.900 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:42:11.900 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:42:11.900 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2434196
00:42:11.900 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:42:11.900 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2434196 /var/tmp/bperf.sock
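Unrolled, the get_transient_errcount call traced above amounts to the sketch below. The rpc.py path, RPC socket, bdev name, and jq filter are copied verbatim from this trace; only the shell variables are introduced here for readability. In this run the query returned 144, so the (( 144 > 0 )) assertion passed: every corrupted digest surfaced as a counted transient transport error rather than a failed I/O.

  # Sketch of digest.sh's get_transient_errcount, reconstructed from the trace above.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # bdev_get_iostat exposes per-status-code NVMe error counters here because the
  # bdevperf app was configured with bdev_nvme_set_options --nvme-error-stat.
  errcount=$("$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # digest.sh@71: the step passes only if transient errors were recorded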
09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2434196 ']'
00:42:11.900 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock
00:42:11.900 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100
00:42:11.900 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:42:11.900 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable
00:42:11.900 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:42:11.901 [2024-05-15 09:08:06.670275] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:42:11.901 [2024-05-15 09:08:06.670359] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2434196 ]
00:42:11.901 I/O size of 131072 is greater than zero copy threshold (65536).
00:42:11.901 Zero copy mechanism will not be used.
00:42:12.160 EAL: No free 2048 kB hugepages reported on node 1
00:42:12.160 [2024-05-15 09:08:06.742494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:12.160 [2024-05-15 09:08:06.831950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:42:12.160 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:42:12.160 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0
00:42:12.160 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:42:12.160 09:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:42:12.418 09:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:42:12.418 09:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:42:12.418 09:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:42:12.418 09:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:42:12.418 09:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:42:12.418 09:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:42:12.984 nvme0n1
00:42:12.984 09:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:42:12.984 09:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
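Condensed, the setup traced above comes down to the sequence sketched below. Every RPC name and flag is copied from this trace; the wrapper definitions are assumptions made here for self-containment (bperf_rpc matches the digest.sh@18 invocations exactly, while rpc_cmd is the autotest helper, assumed to talk to the default RPC socket rather than /var/tmp/bperf.sock). With --ddgst the initiator enables the NVMe/TCP data digest (a CRC32C over each data PDU), and the armed error injection then corrupts crc32c results, so digests stop matching and each affected 128 KiB WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is what the run below shows.

  # Sketch reconstructed from the digest.sh trace above; wrappers are assumptions.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf_rpc() { "$rpc_py" -s /var/tmp/bperf.sock "$@"; }   # bdevperf app socket
  rpc_cmd()   { "$rpc_py" "$@"; }                          # default socket (assumed)

  # Count NVMe errors per status code and retry transient errors indefinitely.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Make sure no CRC32C error injection is armed while the controller attaches.
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  # Attach the target with data digest enabled; the RPC prints "nvme0n1".
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Arm the injection: corrupt crc32c results ('-i 32' is copied from the trace).
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32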
09:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:42:12.984 09:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:42:12.984 09:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:42:12.984 09:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:42:12.984 I/O size of 131072 is greater than zero copy threshold (65536).
00:42:12.984 Zero copy mechanism will not be used.
00:42:12.984 Running I/O for 2 seconds...
00:42:12.984 [2024-05-15 09:08:07.656904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90
00:42:12.984 [2024-05-15 09:08:07.657246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:42:12.984 [2024-05-15 09:08:07.657284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line sequence (tcp.c data digest error on tqpair=(0x5da300), nvme_qpair.c print of a 128 KiB WRITE (len:32) on cid:15, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for each subsequent WRITE of this run; entries from 09:08:07.664755 through 09:08:07.984968 omitted ...]
00:42:13.245 [2024-05-15 09:08:07.992854]
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.245 [2024-05-15 09:08:07.993268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.245 [2024-05-15 09:08:07.993296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:13.245 [2024-05-15 09:08:08.001144] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.245 [2024-05-15 09:08:08.001570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.245 [2024-05-15 09:08:08.001615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:13.245 [2024-05-15 09:08:08.009495] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.245 [2024-05-15 09:08:08.009886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.245 [2024-05-15 09:08:08.009914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:13.245 [2024-05-15 09:08:08.017176] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.245 [2024-05-15 09:08:08.017497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.245 [2024-05-15 09:08:08.017524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:13.245 [2024-05-15 09:08:08.025128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.245 [2024-05-15 09:08:08.025510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.245 [2024-05-15 09:08:08.025539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:13.245 [2024-05-15 09:08:08.032260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.245 [2024-05-15 09:08:08.032669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.245 [2024-05-15 09:08:08.032699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:13.504 [2024-05-15 09:08:08.039421] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.504 [2024-05-15 09:08:08.039721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.504 [2024-05-15 09:08:08.039751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:42:13.504 [2024-05-15 09:08:08.045807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.504 [2024-05-15 09:08:08.046091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.504 [2024-05-15 09:08:08.046120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:13.504 [2024-05-15 09:08:08.052312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.504 [2024-05-15 09:08:08.052595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.504 [2024-05-15 09:08:08.052624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:13.504 [2024-05-15 09:08:08.058780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.504 [2024-05-15 09:08:08.059063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.504 [2024-05-15 09:08:08.059092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:13.504 [2024-05-15 09:08:08.065002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.504 [2024-05-15 09:08:08.065289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.504 [2024-05-15 09:08:08.065317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:13.504 [2024-05-15 09:08:08.071260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.504 [2024-05-15 09:08:08.071542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.504 [2024-05-15 09:08:08.071588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:13.504 [2024-05-15 09:08:08.076848] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.504 [2024-05-15 09:08:08.077131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.504 [2024-05-15 09:08:08.077159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:13.504 [2024-05-15 09:08:08.082721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.504 [2024-05-15 09:08:08.083005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.504 [2024-05-15 09:08:08.083034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:13.504 [2024-05-15 09:08:08.088271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.504 [2024-05-15 09:08:08.088554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.504 [2024-05-15 09:08:08.088581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:13.504 [2024-05-15 09:08:08.094403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.504 [2024-05-15 09:08:08.094670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.504 [2024-05-15 09:08:08.094697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:13.504 [2024-05-15 09:08:08.100660] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.504 [2024-05-15 09:08:08.100948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.504 [2024-05-15 09:08:08.100997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:13.504 [2024-05-15 09:08:08.106854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.504 [2024-05-15 09:08:08.107194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.504 [2024-05-15 09:08:08.107235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:13.504 [2024-05-15 09:08:08.113046] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.504 [2024-05-15 09:08:08.113351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.504 [2024-05-15 09:08:08.113379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:13.504 [2024-05-15 09:08:08.119209] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.119499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.119544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.125174] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.125467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.125496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.131291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.131586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.131613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.137328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.137609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.137637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.143365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.143674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.143702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.148966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.149255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.149283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.154873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.155171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.155199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.161286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.161567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.161596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.167190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.167483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.167512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.173105] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.173397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.173441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.179276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.179571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.179614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.184641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.184920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.184948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.190313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.190591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.190619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.196287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.196566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.196594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.201608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.201889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.201917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.207262] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.207556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 
[2024-05-15 09:08:08.207598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.212941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.213230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.213268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.218964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.219250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.219292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.224827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.225133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.225178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.231186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.231499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.231541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.237550] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.237829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.237857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.243276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.243555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.243584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.248711] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.249008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.249051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.254000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.254319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.254352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.259620] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.259912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.259939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.265271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.265550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.265591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.271108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.505 [2024-05-15 09:08:08.271409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.505 [2024-05-15 09:08:08.271437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:13.505 [2024-05-15 09:08:08.278877] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.506 [2024-05-15 09:08:08.279310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.506 [2024-05-15 09:08:08.279338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:13.506 [2024-05-15 09:08:08.285295] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.506 [2024-05-15 09:08:08.285612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.506 [2024-05-15 09:08:08.285640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:13.506 [2024-05-15 09:08:08.292011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.506 [2024-05-15 09:08:08.292299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.506 [2024-05-15 09:08:08.292330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.298761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 09:08:08.299105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.765 [2024-05-15 09:08:08.299135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.306591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 09:08:08.306929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.765 [2024-05-15 09:08:08.306959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.314355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 09:08:08.314707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.765 [2024-05-15 09:08:08.314736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.322065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 09:08:08.322400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.765 [2024-05-15 09:08:08.322429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.329650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 09:08:08.330005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.765 [2024-05-15 09:08:08.330033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.337424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 09:08:08.337774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.765 [2024-05-15 09:08:08.337805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.344821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 09:08:08.345145] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.765 [2024-05-15 09:08:08.345176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.353175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 09:08:08.353553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.765 [2024-05-15 09:08:08.353586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.360160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 09:08:08.360477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.765 [2024-05-15 09:08:08.360505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.366632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 09:08:08.366943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.765 [2024-05-15 09:08:08.366974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.373117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 09:08:08.373447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.765 [2024-05-15 09:08:08.373475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.379180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 09:08:08.379494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.765 [2024-05-15 09:08:08.379539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.386097] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 09:08:08.386472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.765 [2024-05-15 09:08:08.386515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.394101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 09:08:08.394422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.765 [2024-05-15 09:08:08.394453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.401770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 09:08:08.402142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.765 [2024-05-15 09:08:08.402172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.409970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 09:08:08.410346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.765 [2024-05-15 09:08:08.410375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.418200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 09:08:08.418602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.765 [2024-05-15 09:08:08.418633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.426565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 09:08:08.426985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.765 [2024-05-15 09:08:08.427016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.435072] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 09:08:08.435494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.765 [2024-05-15 09:08:08.435538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.443341] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 09:08:08.443748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.765 [2024-05-15 09:08:08.443784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.451692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 
09:08:08.452151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.765 [2024-05-15 09:08:08.452183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:13.765 [2024-05-15 09:08:08.460179] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.765 [2024-05-15 09:08:08.460583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.766 [2024-05-15 09:08:08.460614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:13.766 [2024-05-15 09:08:08.468602] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.766 [2024-05-15 09:08:08.468959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.766 [2024-05-15 09:08:08.468990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:13.766 [2024-05-15 09:08:08.477137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.766 [2024-05-15 09:08:08.477561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.766 [2024-05-15 09:08:08.477592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:13.766 [2024-05-15 09:08:08.485816] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.766 [2024-05-15 09:08:08.486267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.766 [2024-05-15 09:08:08.486310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:13.766 [2024-05-15 09:08:08.494357] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.766 [2024-05-15 09:08:08.494729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.766 [2024-05-15 09:08:08.494760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:13.766 [2024-05-15 09:08:08.502486] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.766 [2024-05-15 09:08:08.502892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.766 [2024-05-15 09:08:08.502922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:13.766 [2024-05-15 09:08:08.510481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with 
pdu=0x2000190fef90 00:42:13.766 [2024-05-15 09:08:08.510891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.766 [2024-05-15 09:08:08.510922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:13.766 [2024-05-15 09:08:08.518592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.766 [2024-05-15 09:08:08.518993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.766 [2024-05-15 09:08:08.519023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:13.766 [2024-05-15 09:08:08.526788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.766 [2024-05-15 09:08:08.527205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.766 [2024-05-15 09:08:08.527258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:13.766 [2024-05-15 09:08:08.535276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.766 [2024-05-15 09:08:08.535644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.766 [2024-05-15 09:08:08.535675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:13.766 [2024-05-15 09:08:08.543745] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.766 [2024-05-15 09:08:08.544150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.766 [2024-05-15 09:08:08.544181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:13.766 [2024-05-15 09:08:08.551927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:13.766 [2024-05-15 09:08:08.552324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.766 [2024-05-15 09:08:08.552362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:14.025 [2024-05-15 09:08:08.560624] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:14.025 [2024-05-15 09:08:08.560965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:14.025 [2024-05-15 09:08:08.560999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:14.025 [2024-05-15 09:08:08.568879] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:14.025 [2024-05-15 09:08:08.569333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:14.025 [2024-05-15 09:08:08.569378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:14.025 [2024-05-15 09:08:08.577307] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:14.025 [2024-05-15 09:08:08.577735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:14.025 [2024-05-15 09:08:08.577766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:14.025 [2024-05-15 09:08:08.585945] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:14.025 [2024-05-15 09:08:08.586341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:14.025 [2024-05-15 09:08:08.586374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:14.025 [2024-05-15 09:08:08.594380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:14.025 [2024-05-15 09:08:08.594813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:14.025 [2024-05-15 09:08:08.594844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:14.025 [2024-05-15 09:08:08.602817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:14.025 [2024-05-15 09:08:08.603226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:14.025 [2024-05-15 09:08:08.603258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:14.025 [2024-05-15 09:08:08.611532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:14.025 [2024-05-15 09:08:08.611995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:14.025 [2024-05-15 09:08:08.612026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:14.025 [2024-05-15 09:08:08.619988] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:14.025 [2024-05-15 09:08:08.620405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:14.025 [2024-05-15 09:08:08.620434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:14.025 [2024-05-15 09:08:08.628589] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:14.025 [2024-05-15 09:08:08.628982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:14.025 [2024-05-15 09:08:08.629012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:14.025 [2024-05-15 09:08:08.637023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:14.025 [2024-05-15 09:08:08.637428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:14.025 [2024-05-15 09:08:08.637455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:14.025 [2024-05-15 09:08:08.645130] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:14.025 [2024-05-15 09:08:08.645529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:14.025 [2024-05-15 09:08:08.645575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:14.025 [2024-05-15 09:08:08.653470] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:14.025 [2024-05-15 09:08:08.653815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:14.025 [2024-05-15 09:08:08.653846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:14.025 [2024-05-15 09:08:08.661845] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:14.025 [2024-05-15 09:08:08.662255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:14.025 [2024-05-15 09:08:08.662299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:14.025 [2024-05-15 09:08:08.670408] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:14.025 [2024-05-15 09:08:08.670733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:14.025 [2024-05-15 09:08:08.670765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:14.025 [2024-05-15 09:08:08.679001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90 00:42:14.026 [2024-05-15 09:08:08.679390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:14.026 [2024-05-15 09:08:08.679432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:42:14.026 [2024-05-15 09:08:08.687294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90
00:42:14.026 [2024-05-15 09:08:08.687679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:42:14.026 [2024-05-15 09:08:08.687710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair=(0x5da300), the offending len:32 WRITE, its TRANSIENT TRANSPORT ERROR completion) repeats for dozens of further WRITEs between 09:08:08.695 and 09:08:08.997, differing only in lba, timestamps, and the cycling sqhd values 0001/0021/0041/0061 ...]
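Each triplet above is the receive path of NVMe/TCP data-digest protection: the transport recomputes a CRC32C over the incoming data PDU's payload and compares it with the DDGST trailer, data_crc32_calc_done sees a mismatch, and the WRITE that carried the payload is completed with a transport error instead of being executed. Below is a minimal self-contained sketch of that check in C, assuming a plain software bitwise CRC32C; data_digest_ok and ddgst are hypothetical names for illustration, not SPDK's internal API.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78),
 * the digest algorithm NVMe/TCP uses for HDGST/DDGST. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical receive-side check: recompute the digest over the PDU's
 * DATA field and compare it with the DDGST trailer the peer sent. */
static int data_digest_ok(const uint8_t *data, size_t len, uint32_t ddgst)
{
    return crc32c(data, len) == ddgst;
}

int main(void)
{
    uint8_t payload[512];
    memset(payload, 0xA5, sizeof(payload));
    uint32_t ddgst = crc32c(payload, sizeof(payload)); /* digest as sent */

    payload[100] ^= 0x01; /* one bit corrupted in flight */
    printf("%s\n", data_digest_ok(payload, sizeof(payload), ddgst)
                       ? "ok" : "Data digest error"); /* prints the error */
    return 0;
}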
[... the pattern continues uninterrupted through 09:08:09.299 (console time 00:42:14.286 to 00:42:14.547), still qid:1 cid:15, always len:32 WRITEs, every one of them completed with TRANSIENT TRANSPORT ERROR (00/22) and dnr:0 ...]
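The completion line decodes as follows: (00/22) is status code type 0x0 (generic command status) over status code 0x22 (Transient Transport Error), and dnr:0 means the Do Not Retry bit is clear, so the host may resubmit the WRITE. A small decode sketch in C under the standard NVMe completion-entry layout (status field in bits 31:17 of dword 3, phase tag in bit 16, CID in bits 15:0, SQ head pointer in the low half of dword 2); the two dword values here are hypothetical, chosen to reproduce the sqhd:0021 completion above.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical raw dwords 2 and 3 of one 16-byte NVMe completion entry. */
    uint32_t dw2 = 0x00010021; /* SQID=1 in bits 31:16, SQHD=0x0021 in bits 15:0 */
    uint32_t dw3 = 0x0044000F; /* status field plus CID=15 */

    uint16_t sqid = dw2 >> 16;
    uint16_t sqhd = dw2 & 0xFFFF;
    uint16_t cid  = dw3 & 0xFFFF;
    unsigned p    = (dw3 >> 16) & 0x1;  /* phase tag */
    unsigned sc   = (dw3 >> 17) & 0xFF; /* status code: 0x22 Transient Transport Error */
    unsigned sct  = (dw3 >> 25) & 0x7;  /* status code type: 0x0 generic */
    unsigned m    = (dw3 >> 30) & 0x1;  /* more */
    unsigned dnr  = (dw3 >> 31) & 0x1;  /* do not retry; 0 here, so retry is allowed */

    printf("(%02x/%02x) qid:%u cid:%u sqhd:%04x p:%u m:%u dnr:%u\n",
           sct, sc, sqid, cid, sqhd, p, m, dnr);
    return 0;
}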
[... the same cadence holds through 09:08:09.619, one failed WRITE every 5 to 8 ms, the final entries of the run being: ...]
00:42:15.067 [2024-05-15 09:08:09.626742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90
00:42:15.067 [2024-05-15 09:08:09.627148] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:42:15.067 [2024-05-15 09:08:09.627178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:42:15.067 [2024-05-15 09:08:09.634171] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90
00:42:15.067 [2024-05-15 09:08:09.634516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:42:15.067 [2024-05-15 09:08:09.634561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:42:15.067 [2024-05-15 09:08:09.641271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90
00:42:15.067 [2024-05-15 09:08:09.641662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:42:15.067 [2024-05-15 09:08:09.641691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:42:15.067 [2024-05-15 09:08:09.648736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5da300) with pdu=0x2000190fef90
00:42:15.067 [2024-05-15 09:08:09.649009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:42:15.067 [2024-05-15 09:08:09.649039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:42:15.067
00:42:15.067 Latency(us)
00:42:15.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:15.067 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:42:15.067 nvme0n1 : 2.00 4435.47 554.43 0.00 0.00 3598.73 2415.12 8980.86
00:42:15.067 ===================================================================================================================
00:42:15.067 Total : 4435.47 554.43 0.00 0.00 3598.73 2415.12 8980.86
00:42:15.067 0
00:42:15.067 09:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:42:15.067 09:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:42:15.067 09:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:42:15.067 09:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:42:15.067 | .driver_specific
00:42:15.067 | .nvme_error
00:42:15.067 | .status_code
00:42:15.067 | .command_transient_transport_error'
00:42:15.331 09:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 286 > 0 ))
00:42:15.331 09:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2434196
00:42:15.331 09:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2434196 ']'
00:42:15.331 09:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2434196
00:42:15.331 09:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname
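The 286 counted here ties back to the digest failures above: the error test corrupts the NVMe/TCP data digest (DDGST), each poisoned WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), and get_transient_errcount then reads the accumulated counter back over the bperf RPC socket. A standalone sketch of that extraction, reusing the rpc.py path, socket, bdev name, and jq filter shown in the trace (the surrounding assertion is an assumption about how one might script the same check):

    # Sketch: read the transient-transport error counter for nvme0n1
    # over the same bperf RPC socket this run used.
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 )) && echo "transient transport errors recorded: $errcount"

The (( 286 > 0 )) assertion above passes because the corrupted PDUs surface as countable transient transport errors rather than clean completions.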
00:42:15.331 09:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:42:15.331 09:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2434196
00:42:15.331 09:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:42:15.331 09:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:42:15.331 09:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2434196'
killing process with pid 2434196
09:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2434196
Received shutdown signal, test time was about 2.000000 seconds
00:42:15.331
00:42:15.331 Latency(us)
00:42:15.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:15.331 ===================================================================================================================
00:42:15.331 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:42:15.331 09:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2434196
00:42:15.597 09:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2432888
09:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2432888 ']'
09:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2432888
09:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname
00:42:15.597 09:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
09:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2432888
09:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_0
09:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']'
09:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2432888'
00:42:15.597 killing process with pid 2432888
09:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2432888
00:42:15.597 [2024-05-15 09:08:10.172573] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
09:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2432888
00:42:15.597
00:42:15.597 real 0m15.131s
00:42:15.597 user 0m30.204s
00:42:15.597 sys 0m4.067s
09:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # xtrace_disable
09:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:42:15.597 ************************************
00:42:15.597 END TEST nvmf_digest_error
00:42:15.597 ************************************
00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
09:08:10 nvmf_tcp.nvmf_digest
-- nvmf/common.sh@488 -- # nvmfcleanup 00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:42:15.855 rmmod nvme_tcp 00:42:15.855 rmmod nvme_fabrics 00:42:15.855 rmmod nvme_keyring 00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2432888 ']' 00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2432888 00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@947 -- # '[' -z 2432888 ']' 00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@951 -- # kill -0 2432888 00:42:15.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2432888) - No such process 00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@974 -- # echo 'Process with pid 2432888 is not found' 00:42:15.855 Process with pid 2432888 is not found 00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:15.855 09:08:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:17.758 09:08:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:42:17.758 00:42:17.758 real 0m35.119s 00:42:17.758 user 1m0.507s 00:42:17.758 sys 0m10.326s 00:42:17.758 09:08:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:42:17.758 09:08:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:42:17.758 ************************************ 00:42:17.758 END TEST nvmf_digest 00:42:17.758 ************************************ 00:42:17.758 09:08:12 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:42:17.758 09:08:12 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:42:17.758 09:08:12 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:42:17.758 09:08:12 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:42:17.758 09:08:12 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:42:17.758 09:08:12 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:42:17.758 09:08:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:18.016 ************************************ 00:42:18.016 START TEST nvmf_bdevperf 00:42:18.016 
************************************ 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:42:18.017 * Looking for test storage... 00:42:18.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:42:18.017 09:08:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:42:20.545 Found 0000:09:00.0 (0x8086 - 0x159b) 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:42:20.545 Found 0000:09:00.1 (0x8086 - 0x159b) 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:42:20.545 Found net devices under 0000:09:00.0: cvl_0_0 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:42:20.545 Found net devices under 0000:09:00.1: cvl_0_1 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:20.545 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:42:20.546 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:20.546 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:20.546 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:20.546 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:42:20.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:20.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:42:20.546 00:42:20.546 --- 10.0.0.2 ping statistics --- 00:42:20.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:20.546 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:42:20.546 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:20.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:20.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:42:20.546 00:42:20.546 --- 10.0.0.1 ping statistics --- 00:42:20.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:20.546 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:42:20.546 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:20.546 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:42:20.546 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:42:20.546 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:20.546 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:42:20.546 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:42:20.546 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:20.546 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:42:20.546 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:42:20.804 09:08:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:42:20.804 09:08:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:42:20.804 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:42:20.804 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@721 -- # xtrace_disable 00:42:20.804 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:20.804 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2436946 00:42:20.804 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:42:20.804 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2436946 00:42:20.804 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@828 -- # '[' -z 2436946 ']' 00:42:20.804 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:20.804 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local max_retries=100 00:42:20.804 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:20.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:20.804 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # xtrace_disable 00:42:20.804 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:20.804 [2024-05-15 09:08:15.392290] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:42:20.804 [2024-05-15 09:08:15.392364] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:20.804 EAL: No free 2048 kB hugepages reported on node 1 00:42:20.804 [2024-05-15 09:08:15.471627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:20.804 [2024-05-15 09:08:15.563622] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
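Both ping checks above succeed because nvmftestinit splits the two ice-driver ports across network namespaces: cvl_0_0 (the target side, 10.0.0.2) is moved into cvl_0_0_ns_spdk, cvl_0_1 (the initiator side, 10.0.0.1) stays in the root namespace, an iptables rule opens TCP port 4420, and nvmf_tgt is then launched inside the namespace. A condensed sketch assembled from the ip/iptables/nvmf_tgt calls visible in this trace (ordering is mine; interface names and addresses are the ones this run used):

    # Condensed from the nvmf/common.sh steps traced above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into its own ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

Keeping target and initiator in separate namespaces forces the NVMe/TCP traffic onto the physical CVL link rather than loopback, which is presumably the point of the phy-flavored job.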
00:42:20.804 [2024-05-15 09:08:15.563684] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:20.804 [2024-05-15 09:08:15.563701] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:20.804 [2024-05-15 09:08:15.563714] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:20.804 [2024-05-15 09:08:15.563727] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:20.804 [2024-05-15 09:08:15.564120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:42:20.804 [2024-05-15 09:08:15.564179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:20.804 [2024-05-15 09:08:15.564175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@861 -- # return 0 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@727 -- # xtrace_disable 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:21.062 [2024-05-15 09:08:15.696446] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:21.062 Malloc0 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 
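Stripped of the rpc_cmd/xtrace wrapping, the target bring-up traced here is a short sequence of JSON-RPC calls against the socket noted above (/var/tmp/spdk.sock). A hedged sketch of the same five steps issued directly with scripts/rpc.py, which is what rpc_cmd wraps; all flags are copied from the trace, and the unix socket stays reachable from the root namespace even though the target runs under ip netns exec:

    # Sketch of the bring-up steps traced above, as direct rpc.py calls.
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192            # same transport opts as the rpc_cmd above
    $RPC bdev_malloc_create 64 512 -b Malloc0               # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420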
00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:21.062 [2024-05-15 09:08:15.759437] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:42:21.062 [2024-05-15 09:08:15.759777] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:42:21.062 { 00:42:21.062 "params": { 00:42:21.062 "name": "Nvme$subsystem", 00:42:21.062 "trtype": "$TEST_TRANSPORT", 00:42:21.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:21.062 "adrfam": "ipv4", 00:42:21.062 "trsvcid": "$NVMF_PORT", 00:42:21.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:21.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:21.062 "hdgst": ${hdgst:-false}, 00:42:21.062 "ddgst": ${ddgst:-false} 00:42:21.062 }, 00:42:21.062 "method": "bdev_nvme_attach_controller" 00:42:21.062 } 00:42:21.062 EOF 00:42:21.062 )") 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:42:21.062 09:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:42:21.062 "params": { 00:42:21.062 "name": "Nvme1", 00:42:21.062 "trtype": "tcp", 00:42:21.062 "traddr": "10.0.0.2", 00:42:21.062 "adrfam": "ipv4", 00:42:21.062 "trsvcid": "4420", 00:42:21.062 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:21.062 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:21.062 "hdgst": false, 00:42:21.062 "ddgst": false 00:42:21.062 }, 00:42:21.062 "method": "bdev_nvme_attach_controller" 00:42:21.062 }' 00:42:21.062 [2024-05-15 09:08:15.804066] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:42:21.062 [2024-05-15 09:08:15.804149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2436987 ] 00:42:21.062 EAL: No free 2048 kB hugepages reported on node 1 00:42:21.319 [2024-05-15 09:08:15.878832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:21.319 [2024-05-15 09:08:15.962321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:21.577 Running I/O for 1 seconds... 
00:42:22.508
00:42:22.508 Latency(us)
00:42:22.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:22.508 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:42:22.508 Verification LBA range: start 0x0 length 0x4000
00:42:22.509 Nvme1n1 : 1.01 8670.38 33.87 0.00 0.00 14692.91 1796.17 15534.46
00:42:22.509 ===================================================================================================================
00:42:22.509 Total : 8670.38 33.87 0.00 0.00 14692.91 1796.17 15534.46
00:42:22.767 09:08:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2437232
00:42:22.767 09:08:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:42:22.767 09:08:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:42:22.767 09:08:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:42:22.767 09:08:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:42:22.767 09:08:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:42:22.767 09:08:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:42:22.767 09:08:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:42:22.767 {
00:42:22.767 "params": {
00:42:22.767 "name": "Nvme$subsystem",
00:42:22.767 "trtype": "$TEST_TRANSPORT",
00:42:22.767 "traddr": "$NVMF_FIRST_TARGET_IP",
00:42:22.767 "adrfam": "ipv4",
00:42:22.767 "trsvcid": "$NVMF_PORT",
00:42:22.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:42:22.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:42:22.767 "hdgst": ${hdgst:-false},
00:42:22.767 "ddgst": ${ddgst:-false}
00:42:22.767 },
00:42:22.767 "method": "bdev_nvme_attach_controller"
00:42:22.767 }
00:42:22.767 EOF
00:42:22.767 )")
00:42:22.767 09:08:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:42:22.767 09:08:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:42:22.767 09:08:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:42:22.767 09:08:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:42:22.767 "params": {
00:42:22.767 "name": "Nvme1",
00:42:22.767 "trtype": "tcp",
00:42:22.767 "traddr": "10.0.0.2",
00:42:22.767 "adrfam": "ipv4",
00:42:22.767 "trsvcid": "4420",
00:42:22.767 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:42:22.767 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:42:22.767 "hdgst": false,
00:42:22.767 "ddgst": false
00:42:22.767 },
00:42:22.767 "method": "bdev_nvme_attach_controller"
00:42:22.767 }'
00:42:22.767 [2024-05-15 09:08:17.406708] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:42:22.767 [2024-05-15 09:08:17.406781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2437232 ]
00:42:22.767 EAL: No free 2048 kB hugepages reported on node 1
00:42:22.767 [2024-05-15 09:08:17.474389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:22.767 [2024-05-15 09:08:17.558697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:42:23.024 Running I/O for 15 seconds...
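This second run differs from the first in two ways visible on its command line: it runs for 15 seconds, and it passes -f, which the harness appears to rely on so bdevperf keeps running when its controller goes away. That matters because the next step kills the target out from under it, and the flood of ABORTED - SQ DELETION (00/08) completions that follows is the expected wreckage of the in-flight queue as the qpair is torn down. A sketch of the disruption, using the pids from this run:

    # Sketch of the failure the harness injects next (pids are from this run).
    kill -9 2436946   # SIGKILL the nvmf target mid-run; no graceful shutdown
    sleep 3           # give the host side time to notice the dead connection
    # Commands still queued on the TCP qpair complete with
    # ABORTED - SQ DELETION (00/08), as the records below show.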
00:42:26.311 09:08:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2436946 00:42:26.311 09:08:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:42:26.311 [2024-05-15 09:08:20.379409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.311 [2024-05-15 09:08:20.379461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:26.311 [2024-05-15 09:08:20.379494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:26.311 [2024-05-15 09:08:20.379513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:26.311 [2024-05-15 09:08:20.379532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:26.311 [2024-05-15 09:08:20.379548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:26.311 [2024-05-15 09:08:20.379564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:51448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:26.311 [2024-05-15 09:08:20.379582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:26.311 [2024-05-15 09:08:20.379599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:26.311 [2024-05-15 09:08:20.379615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:26.311 [2024-05-15 09:08:20.379632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:26.311 [2024-05-15 09:08:20.379647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:26.311 [2024-05-15 09:08:20.379665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:51472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:26.311 [2024-05-15 09:08:20.379681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:26.311 [2024-05-15 09:08:20.379696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:26.311 [2024-05-15 09:08:20.379720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:26.311 [2024-05-15 09:08:20.379738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:26.311 [2024-05-15 09:08:20.379752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:26.311 [2024-05-15 09:08:20.379768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:51496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:26.311 [2024-05-15 09:08:20.379800] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the queued-I/O abort dump continues in the same two-record pattern, timestamps 09:08:20.379815 through 09:08:20.384031 (wall clock 00:42:26.311 - 00:42:26.314): nvme_qpair.c: 243:nvme_io_qpair_print_command prints each queued command — WRITE sqid:1 lba:51504 through lba:51952 (step 8, cid varies, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ sqid:1 lba:50952 through lba:51424 (step 8, cid varies, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) — and nvme_qpair.c: 474:spdk_nvme_print_completion prints the matching completion, in every case ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
[2024-05-15 09:08:20.384048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeda010 is same with the state(5) to be set
[2024-05-15 09:08:20.384068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-05-15 09:08:20.384081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-05-15 09:08:20.384094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51960 len:8 PRP1 0x0 PRP2 0x0
[2024-05-15 09:08:20.384108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-05-15 09:08:20.384174] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xeda010 was disconnected and freed. reset controller.
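The parenthesised pair that spdk_nvme_print_completion attaches to each status — here (00/08) — is the NVMe Status Code Type and Status Code in hex: SCT 0x0 (Generic Command Status) with SC 0x08 (Command Aborted due to SQ Deletion), which is why every command still queued on qid:1 completes as ABORTED - SQ DELETION once the submission queue is torn down. A minimal, hypothetical decoder for skimming dumps like this (not part of the test suite):

```python
# A minimal, hypothetical decoder for the "(SCT/SC)" pair that
# spdk_nvme_print_completion prints, e.g. "ABORTED - SQ DELETION (00/08)".
# Only the status seen in this log is mapped; everything else falls through.
import re

STATUS_NAMES = {
    # (Status Code Type, Status Code) per the NVMe spec
    (0x0, 0x08): "ABORTED - SQ DELETION",  # Generic status / Command Aborted due to SQ Deletion
}

PAIR_RE = re.compile(r"\((?P<sct>[0-9a-fA-F]{2})/(?P<sc>[0-9a-fA-F]{2})\)")

def decode_status(line):
    """Return a readable status name for one completion line, or None."""
    m = PAIR_RE.search(line)
    if m is None:
        return None
    sct, sc = int(m.group("sct"), 16), int(m.group("sc"), 16)
    return STATUS_NAMES.get((sct, sc), "unknown sct=0x%x sc=0x%x" % (sct, sc))

print(decode_status("ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0"))
# -> ABORTED - SQ DELETION
```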
00:42:26.314 [2024-05-15 09:08:20.387875] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-05-15 09:08:20.387948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
[2024-05-15 09:08:20.388666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 09:08:20.388878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 09:08:20.388909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
[2024-05-15 09:08:20.388927] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
[2024-05-15 09:08:20.389172] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
[2024-05-15 09:08:20.389428] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-05-15 09:08:20.389451] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-05-15 09:08:20.389469] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-05-15 09:08:20.393161] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-05-15 09:08:20.402309] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-05-15 09:08:20.402694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 09:08:20.402829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 09:08:20.402855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
[2024-05-15 09:08:20.402871] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
[2024-05-15 09:08:20.403130] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
[2024-05-15 09:08:20.403391] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-05-15 09:08:20.403413] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-05-15 09:08:20.403428] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-05-15 09:08:20.407080] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
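Every reconnect attempt in this run fails identically: connect() inside posix_sock_create returns errno = 111, which on Linux is ECONNREFUSED — at that moment nothing is accepting TCP connections at 10.0.0.2:4420. The Python stdlib confirms the mapping (shown only as a quick illustration):

```python
# Quick illustration: errno 111 from connect() decoded with the Python stdlib.
import errno, os

print(errno.errorcode[111])  # -> 'ECONNREFUSED'
print(os.strerror(111))      # -> 'Connection refused'
```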
00:42:26.315 [2024-05-15 09:08:20.416334] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... 18 further identical reset attempts against nqn.2016-06.io.spdk:cnode1 follow (disconnect timestamps 09:08:20.416334 through 09:08:20.654668, one roughly every 14 ms; wall clock 00:42:26.315 - 00:42:26.317). Each cycle logs: nvme_ctrlr_disconnect "resetting controller"; two posix_sock_create connect() failures with errno = 111; nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420"; nvme_tcp_qpair_set_recv_state "The recv state of tqpair=0xedfd30 is same with the state(5) to be set"; nvme_tcp_qpair_process_completions "Failed to flush tqpair=0xedfd30 (9): Bad file descriptor"; nvme_ctrlr_process_init "Ctrlr is in error state"; spdk_nvme_ctrlr_reconnect_poll_async "controller reinitialization failed"; nvme_ctrlr_fail "in failed state." ...]
[2024-05-15 09:08:20.659418] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.317 [2024-05-15 09:08:20.668631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.317 [2024-05-15 09:08:20.669017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.317 [2024-05-15 09:08:20.669186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.317 [2024-05-15 09:08:20.669222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.317 [2024-05-15 09:08:20.669241] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.317 [2024-05-15 09:08:20.669483] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.317 [2024-05-15 09:08:20.669729] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.317 [2024-05-15 09:08:20.669752] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.317 [2024-05-15 09:08:20.669768] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.317 [2024-05-15 09:08:20.673405] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:26.317 [2024-05-15 09:08:20.682621] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.317 [2024-05-15 09:08:20.683049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.317 [2024-05-15 09:08:20.683220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.317 [2024-05-15 09:08:20.683248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.317 [2024-05-15 09:08:20.683265] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.317 [2024-05-15 09:08:20.683507] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.317 [2024-05-15 09:08:20.683753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.317 [2024-05-15 09:08:20.683776] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.317 [2024-05-15 09:08:20.683792] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.317 [2024-05-15 09:08:20.687437] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:26.317 [2024-05-15 09:08:20.696657] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.317 [2024-05-15 09:08:20.697079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.317 [2024-05-15 09:08:20.697194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.317 [2024-05-15 09:08:20.697233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.317 [2024-05-15 09:08:20.697252] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.317 [2024-05-15 09:08:20.697494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.317 [2024-05-15 09:08:20.697740] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.317 [2024-05-15 09:08:20.697768] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.317 [2024-05-15 09:08:20.697785] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.317 [2024-05-15 09:08:20.701423] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:26.317 [2024-05-15 09:08:20.710678] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.317 [2024-05-15 09:08:20.711112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.317 [2024-05-15 09:08:20.711281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.317 [2024-05-15 09:08:20.711320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.317 [2024-05-15 09:08:20.711338] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.317 [2024-05-15 09:08:20.711580] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.317 [2024-05-15 09:08:20.711827] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.317 [2024-05-15 09:08:20.711850] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.317 [2024-05-15 09:08:20.711866] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.317 [2024-05-15 09:08:20.715506] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:26.317 [2024-05-15 09:08:20.724756] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.317 [2024-05-15 09:08:20.725142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.317 [2024-05-15 09:08:20.725317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.317 [2024-05-15 09:08:20.725348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.317 [2024-05-15 09:08:20.725365] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.317 [2024-05-15 09:08:20.725608] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.317 [2024-05-15 09:08:20.725854] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.317 [2024-05-15 09:08:20.725878] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.317 [2024-05-15 09:08:20.725894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.317 [2024-05-15 09:08:20.729538] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:26.317 [2024-05-15 09:08:20.738750] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.317 [2024-05-15 09:08:20.739136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.317 [2024-05-15 09:08:20.739267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.317 [2024-05-15 09:08:20.739297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.317 [2024-05-15 09:08:20.739315] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.317 [2024-05-15 09:08:20.739557] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.317 [2024-05-15 09:08:20.739803] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.317 [2024-05-15 09:08:20.739827] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.317 [2024-05-15 09:08:20.739849] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.317 [2024-05-15 09:08:20.743490] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:26.317 [2024-05-15 09:08:20.752717] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.317 [2024-05-15 09:08:20.753105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.317 [2024-05-15 09:08:20.753270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.317 [2024-05-15 09:08:20.753299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.317 [2024-05-15 09:08:20.753316] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.317 [2024-05-15 09:08:20.753558] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.317 [2024-05-15 09:08:20.753804] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.318 [2024-05-15 09:08:20.753827] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.318 [2024-05-15 09:08:20.753843] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.318 [2024-05-15 09:08:20.757484] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:26.318 [2024-05-15 09:08:20.766713] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.318 [2024-05-15 09:08:20.767131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.318 [2024-05-15 09:08:20.767279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.318 [2024-05-15 09:08:20.767308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.318 [2024-05-15 09:08:20.767325] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.318 [2024-05-15 09:08:20.767567] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.318 [2024-05-15 09:08:20.767812] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.318 [2024-05-15 09:08:20.767836] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.318 [2024-05-15 09:08:20.767852] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.318 [2024-05-15 09:08:20.771510] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:26.318 [2024-05-15 09:08:20.780720] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.318 [2024-05-15 09:08:20.781100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.318 [2024-05-15 09:08:20.781280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.318 [2024-05-15 09:08:20.781309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.318 [2024-05-15 09:08:20.781326] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.318 [2024-05-15 09:08:20.781568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.318 [2024-05-15 09:08:20.781813] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.318 [2024-05-15 09:08:20.781836] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.318 [2024-05-15 09:08:20.781852] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.318 [2024-05-15 09:08:20.785505] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:26.318 [2024-05-15 09:08:20.794728] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.318 [2024-05-15 09:08:20.795132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.318 [2024-05-15 09:08:20.795266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.318 [2024-05-15 09:08:20.795296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.318 [2024-05-15 09:08:20.795313] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.318 [2024-05-15 09:08:20.795555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.318 [2024-05-15 09:08:20.795801] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.318 [2024-05-15 09:08:20.795825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.318 [2024-05-15 09:08:20.795841] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.318 [2024-05-15 09:08:20.799480] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:26.318 [2024-05-15 09:08:20.808691] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.318 [2024-05-15 09:08:20.809097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.318 [2024-05-15 09:08:20.809247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.318 [2024-05-15 09:08:20.809275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.318 [2024-05-15 09:08:20.809293] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.318 [2024-05-15 09:08:20.809534] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.318 [2024-05-15 09:08:20.809780] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.318 [2024-05-15 09:08:20.809803] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.318 [2024-05-15 09:08:20.809819] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.318 [2024-05-15 09:08:20.813459] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:26.318 [2024-05-15 09:08:20.822674] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.318 [2024-05-15 09:08:20.823079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.318 [2024-05-15 09:08:20.823228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.318 [2024-05-15 09:08:20.823256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.318 [2024-05-15 09:08:20.823274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.318 [2024-05-15 09:08:20.823515] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.318 [2024-05-15 09:08:20.823761] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.318 [2024-05-15 09:08:20.823784] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.318 [2024-05-15 09:08:20.823800] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.318 [2024-05-15 09:08:20.827437] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:26.318 [2024-05-15 09:08:20.836646] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.318 [2024-05-15 09:08:20.837056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.318 [2024-05-15 09:08:20.837201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.318 [2024-05-15 09:08:20.837237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.318 [2024-05-15 09:08:20.837256] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.318 [2024-05-15 09:08:20.837498] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.318 [2024-05-15 09:08:20.837743] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.318 [2024-05-15 09:08:20.837766] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.318 [2024-05-15 09:08:20.837782] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.318 [2024-05-15 09:08:20.841419] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:26.318 [2024-05-15 09:08:20.850631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.318 [2024-05-15 09:08:20.851037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.318 [2024-05-15 09:08:20.851181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.318 [2024-05-15 09:08:20.851210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.318 [2024-05-15 09:08:20.851237] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.318 [2024-05-15 09:08:20.851480] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.318 [2024-05-15 09:08:20.851726] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.318 [2024-05-15 09:08:20.851749] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.318 [2024-05-15 09:08:20.851765] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.318 [2024-05-15 09:08:20.855402] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:26.318 [2024-05-15 09:08:20.864612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.318 [2024-05-15 09:08:20.865020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.318 [2024-05-15 09:08:20.865140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.318 [2024-05-15 09:08:20.865168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.318 [2024-05-15 09:08:20.865186] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.318 [2024-05-15 09:08:20.865439] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.318 [2024-05-15 09:08:20.865686] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.318 [2024-05-15 09:08:20.865709] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.318 [2024-05-15 09:08:20.865725] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.318 [2024-05-15 09:08:20.869361] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:26.318 [2024-05-15 09:08:20.878571] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.318 [2024-05-15 09:08:20.878974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.318 [2024-05-15 09:08:20.879148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.318 [2024-05-15 09:08:20.879177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.318 [2024-05-15 09:08:20.879195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.318 [2024-05-15 09:08:20.879446] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.318 [2024-05-15 09:08:20.879692] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.318 [2024-05-15 09:08:20.879716] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.318 [2024-05-15 09:08:20.879732] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.318 [2024-05-15 09:08:20.883369] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:26.318 [2024-05-15 09:08:20.892599] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.319 [2024-05-15 09:08:20.892981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.319 [2024-05-15 09:08:20.893108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.319 [2024-05-15 09:08:20.893138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.319 [2024-05-15 09:08:20.893156] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.319 [2024-05-15 09:08:20.893409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.319 [2024-05-15 09:08:20.893656] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.319 [2024-05-15 09:08:20.893680] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.319 [2024-05-15 09:08:20.893696] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.319 [2024-05-15 09:08:20.897370] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:26.319 [2024-05-15 09:08:20.906581] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.319 [2024-05-15 09:08:20.906984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.319 [2024-05-15 09:08:20.907127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.319 [2024-05-15 09:08:20.907156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.319 [2024-05-15 09:08:20.907173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.319 [2024-05-15 09:08:20.907426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.319 [2024-05-15 09:08:20.907672] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.319 [2024-05-15 09:08:20.907696] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.319 [2024-05-15 09:08:20.907712] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.319 [2024-05-15 09:08:20.911366] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:26.319 [2024-05-15 09:08:20.920585] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.319 [2024-05-15 09:08:20.921001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.319 [2024-05-15 09:08:20.921150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.319 [2024-05-15 09:08:20.921183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.319 [2024-05-15 09:08:20.921201] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.319 [2024-05-15 09:08:20.921451] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.319 [2024-05-15 09:08:20.921698] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.319 [2024-05-15 09:08:20.921722] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.319 [2024-05-15 09:08:20.921738] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.319 [2024-05-15 09:08:20.925376] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:26.319 [2024-05-15 09:08:20.934593] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.319 [2024-05-15 09:08:20.935026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.319 [2024-05-15 09:08:20.935189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.319 [2024-05-15 09:08:20.935227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.319 [2024-05-15 09:08:20.935247] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.319 [2024-05-15 09:08:20.935489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.319 [2024-05-15 09:08:20.935736] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.319 [2024-05-15 09:08:20.935759] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.319 [2024-05-15 09:08:20.935775] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.319 [2024-05-15 09:08:20.939414] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:26.319 [2024-05-15 09:08:20.948642] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.319 [2024-05-15 09:08:20.949097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.319 [2024-05-15 09:08:20.949266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.319 [2024-05-15 09:08:20.949296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.319 [2024-05-15 09:08:20.949314] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.319 [2024-05-15 09:08:20.949556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.319 [2024-05-15 09:08:20.949801] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.319 [2024-05-15 09:08:20.949825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.319 [2024-05-15 09:08:20.949840] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.319 [2024-05-15 09:08:20.953481] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:26.319 [2024-05-15 09:08:20.962719] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.319 [2024-05-15 09:08:20.963108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.319 [2024-05-15 09:08:20.963256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.319 [2024-05-15 09:08:20.963286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.319 [2024-05-15 09:08:20.963309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.319 [2024-05-15 09:08:20.963552] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.319 [2024-05-15 09:08:20.963799] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.319 [2024-05-15 09:08:20.963822] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.319 [2024-05-15 09:08:20.963838] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.319 [2024-05-15 09:08:20.967478] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:26.319 [2024-05-15 09:08:20.976692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.319 [2024-05-15 09:08:20.977078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.319 [2024-05-15 09:08:20.977230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.319 [2024-05-15 09:08:20.977260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.319 [2024-05-15 09:08:20.977278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.319 [2024-05-15 09:08:20.977519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.319 [2024-05-15 09:08:20.977766] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.319 [2024-05-15 09:08:20.977789] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.319 [2024-05-15 09:08:20.977805] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.319 [2024-05-15 09:08:20.981448] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:26.319 [2024-05-15 09:08:20.990671] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.319 [2024-05-15 09:08:20.991059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.319 [2024-05-15 09:08:20.991194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.319 [2024-05-15 09:08:20.991233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.319 [2024-05-15 09:08:20.991253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.319 [2024-05-15 09:08:20.991495] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.319 [2024-05-15 09:08:20.991741] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.319 [2024-05-15 09:08:20.991764] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.319 [2024-05-15 09:08:20.991780] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.319 [2024-05-15 09:08:20.995420] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:26.319 [2024-05-15 09:08:21.004639] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.319 [2024-05-15 09:08:21.005045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.319 [2024-05-15 09:08:21.005194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.319 [2024-05-15 09:08:21.005234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.319 [2024-05-15 09:08:21.005254] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.319 [2024-05-15 09:08:21.005502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.319 [2024-05-15 09:08:21.005749] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.319 [2024-05-15 09:08:21.005772] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.319 [2024-05-15 09:08:21.005788] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.319 [2024-05-15 09:08:21.009431] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:26.319 [2024-05-15 09:08:21.018655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.319 [2024-05-15 09:08:21.019062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.319 [2024-05-15 09:08:21.019195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.319 [2024-05-15 09:08:21.019249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.319 [2024-05-15 09:08:21.019268] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.320 [2024-05-15 09:08:21.019510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.320 [2024-05-15 09:08:21.019756] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.320 [2024-05-15 09:08:21.019779] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.320 [2024-05-15 09:08:21.019795] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.320 [2024-05-15 09:08:21.023431] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:26.320 [2024-05-15 09:08:21.032663] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.320 [2024-05-15 09:08:21.033067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.320 [2024-05-15 09:08:21.033238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.320 [2024-05-15 09:08:21.033267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.320 [2024-05-15 09:08:21.033285] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.320 [2024-05-15 09:08:21.033527] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.320 [2024-05-15 09:08:21.033774] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.320 [2024-05-15 09:08:21.033797] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.320 [2024-05-15 09:08:21.033812] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.320 [2024-05-15 09:08:21.037454] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:26.320 [2024-05-15 09:08:21.046668] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.320 [2024-05-15 09:08:21.047049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.320 [2024-05-15 09:08:21.047193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.320 [2024-05-15 09:08:21.047230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.320 [2024-05-15 09:08:21.047250] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.320 [2024-05-15 09:08:21.047493] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.320 [2024-05-15 09:08:21.047744] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.320 [2024-05-15 09:08:21.047768] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.320 [2024-05-15 09:08:21.047784] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.320 [2024-05-15 09:08:21.051423] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:26.320 [2024-05-15 09:08:21.060647] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.320 [2024-05-15 09:08:21.061057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.320 [2024-05-15 09:08:21.061230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.320 [2024-05-15 09:08:21.061259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.320 [2024-05-15 09:08:21.061276] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.320 [2024-05-15 09:08:21.061518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.320 [2024-05-15 09:08:21.061764] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.320 [2024-05-15 09:08:21.061788] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.320 [2024-05-15 09:08:21.061803] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.320 [2024-05-15 09:08:21.065442] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:26.320 [2024-05-15 09:08:21.074658] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.320 [2024-05-15 09:08:21.075045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.320 [2024-05-15 09:08:21.075192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.320 [2024-05-15 09:08:21.075229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.320 [2024-05-15 09:08:21.075249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.320 [2024-05-15 09:08:21.075491] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.320 [2024-05-15 09:08:21.075737] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.320 [2024-05-15 09:08:21.075761] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.320 [2024-05-15 09:08:21.075776] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.320 [2024-05-15 09:08:21.079418] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:26.320 [2024-05-15 09:08:21.088637] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.320 [2024-05-15 09:08:21.089020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.320 [2024-05-15 09:08:21.089160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.320 [2024-05-15 09:08:21.089189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.320 [2024-05-15 09:08:21.089206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.320 [2024-05-15 09:08:21.089459] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.320 [2024-05-15 09:08:21.089706] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.320 [2024-05-15 09:08:21.089730] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.320 [2024-05-15 09:08:21.089750] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.320 [2024-05-15 09:08:21.093391] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:26.579 [2024-05-15 09:08:21.102692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:26.579 [2024-05-15 09:08:21.103102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.579 [2024-05-15 09:08:21.103246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:26.579 [2024-05-15 09:08:21.103276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:26.579 [2024-05-15 09:08:21.103294] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:26.579 [2024-05-15 09:08:21.103535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:26.579 [2024-05-15 09:08:21.103781] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:26.579 [2024-05-15 09:08:21.103804] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:26.579 [2024-05-15 09:08:21.103820] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:26.579 [2024-05-15 09:08:21.107464] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:26.579 [2024-05-15 09:08:21.116707] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.579 [2024-05-15 09:08:21.117114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.579 [2024-05-15 09:08:21.117284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.579 [2024-05-15 09:08:21.117314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.579 [2024-05-15 09:08:21.117331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.579 [2024-05-15 09:08:21.117574] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.579 [2024-05-15 09:08:21.117820] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.579 [2024-05-15 09:08:21.117843] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.579 [2024-05-15 09:08:21.117859] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.579 [2024-05-15 09:08:21.121503] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.579 [2024-05-15 09:08:21.130723] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.579 [2024-05-15 09:08:21.131128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.579 [2024-05-15 09:08:21.131273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.579 [2024-05-15 09:08:21.131302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.579 [2024-05-15 09:08:21.131320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.579 [2024-05-15 09:08:21.131562] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.579 [2024-05-15 09:08:21.131808] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.579 [2024-05-15 09:08:21.131831] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.579 [2024-05-15 09:08:21.131846] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.580 [2024-05-15 09:08:21.135496] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.580 [2024-05-15 09:08:21.144724] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.580 [2024-05-15 09:08:21.145116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.580 [2024-05-15 09:08:21.145246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.580 [2024-05-15 09:08:21.145275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.580 [2024-05-15 09:08:21.145293] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.580 [2024-05-15 09:08:21.145534] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.580 [2024-05-15 09:08:21.145780] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.580 [2024-05-15 09:08:21.145803] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.580 [2024-05-15 09:08:21.145819] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.580 [2024-05-15 09:08:21.149464] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.580 [2024-05-15 09:08:21.158825] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.580 [2024-05-15 09:08:21.159210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.580 [2024-05-15 09:08:21.159367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.580 [2024-05-15 09:08:21.159398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.580 [2024-05-15 09:08:21.159416] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.580 [2024-05-15 09:08:21.159658] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.580 [2024-05-15 09:08:21.159904] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.580 [2024-05-15 09:08:21.159927] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.580 [2024-05-15 09:08:21.159943] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.580 [2024-05-15 09:08:21.163585] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.580 [2024-05-15 09:08:21.172815] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.580 [2024-05-15 09:08:21.173204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.580 [2024-05-15 09:08:21.173379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.580 [2024-05-15 09:08:21.173408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.580 [2024-05-15 09:08:21.173425] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.580 [2024-05-15 09:08:21.173668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.580 [2024-05-15 09:08:21.173913] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.580 [2024-05-15 09:08:21.173936] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.580 [2024-05-15 09:08:21.173952] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.580 [2024-05-15 09:08:21.177597] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.580 [2024-05-15 09:08:21.186824] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.580 [2024-05-15 09:08:21.187238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.580 [2024-05-15 09:08:21.187384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.580 [2024-05-15 09:08:21.187412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.580 [2024-05-15 09:08:21.187430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.580 [2024-05-15 09:08:21.187671] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.580 [2024-05-15 09:08:21.187917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.580 [2024-05-15 09:08:21.187940] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.580 [2024-05-15 09:08:21.187956] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.580 [2024-05-15 09:08:21.191595] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.580 [2024-05-15 09:08:21.200810] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.580 [2024-05-15 09:08:21.201238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.580 [2024-05-15 09:08:21.201455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.580 [2024-05-15 09:08:21.201483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.580 [2024-05-15 09:08:21.201501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.580 [2024-05-15 09:08:21.201743] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.580 [2024-05-15 09:08:21.201989] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.580 [2024-05-15 09:08:21.202012] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.580 [2024-05-15 09:08:21.202027] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.580 [2024-05-15 09:08:21.205667] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.580 [2024-05-15 09:08:21.214890] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.580 [2024-05-15 09:08:21.215496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.580 [2024-05-15 09:08:21.215617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.580 [2024-05-15 09:08:21.215646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.580 [2024-05-15 09:08:21.215663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.580 [2024-05-15 09:08:21.215906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.580 [2024-05-15 09:08:21.216151] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.580 [2024-05-15 09:08:21.216174] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.580 [2024-05-15 09:08:21.216190] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.580 [2024-05-15 09:08:21.219841] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.580 [2024-05-15 09:08:21.228857] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.580 [2024-05-15 09:08:21.229281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.580 [2024-05-15 09:08:21.229427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.580 [2024-05-15 09:08:21.229455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.580 [2024-05-15 09:08:21.229473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.580 [2024-05-15 09:08:21.229715] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.580 [2024-05-15 09:08:21.229961] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.580 [2024-05-15 09:08:21.229984] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.580 [2024-05-15 09:08:21.229999] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.580 [2024-05-15 09:08:21.233641] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.580 [2024-05-15 09:08:21.242876] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.580 [2024-05-15 09:08:21.243294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.580 [2024-05-15 09:08:21.243434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.580 [2024-05-15 09:08:21.243462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.580 [2024-05-15 09:08:21.243480] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.580 [2024-05-15 09:08:21.243722] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.580 [2024-05-15 09:08:21.243968] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.580 [2024-05-15 09:08:21.243992] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.580 [2024-05-15 09:08:21.244007] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.580 [2024-05-15 09:08:21.247647] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.580 [2024-05-15 09:08:21.256870] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.580 [2024-05-15 09:08:21.257230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.580 [2024-05-15 09:08:21.257384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.580 [2024-05-15 09:08:21.257413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.580 [2024-05-15 09:08:21.257430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.580 [2024-05-15 09:08:21.257672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.580 [2024-05-15 09:08:21.257918] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.580 [2024-05-15 09:08:21.257941] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.580 [2024-05-15 09:08:21.257956] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.580 [2024-05-15 09:08:21.261597] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.580 [2024-05-15 09:08:21.270823] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.580 [2024-05-15 09:08:21.271230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.581 [2024-05-15 09:08:21.271405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.581 [2024-05-15 09:08:21.271438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.581 [2024-05-15 09:08:21.271457] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.581 [2024-05-15 09:08:21.271699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.581 [2024-05-15 09:08:21.271945] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.581 [2024-05-15 09:08:21.271968] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.581 [2024-05-15 09:08:21.271984] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.581 [2024-05-15 09:08:21.275620] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.581 [2024-05-15 09:08:21.284838] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.581 [2024-05-15 09:08:21.285257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.581 [2024-05-15 09:08:21.285431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.581 [2024-05-15 09:08:21.285477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.581 [2024-05-15 09:08:21.285495] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.581 [2024-05-15 09:08:21.285737] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.581 [2024-05-15 09:08:21.285982] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.581 [2024-05-15 09:08:21.286006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.581 [2024-05-15 09:08:21.286022] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.581 [2024-05-15 09:08:21.289663] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.581 [2024-05-15 09:08:21.298885] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.581 [2024-05-15 09:08:21.299275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.581 [2024-05-15 09:08:21.299446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.581 [2024-05-15 09:08:21.299475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.581 [2024-05-15 09:08:21.299492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.581 [2024-05-15 09:08:21.299734] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.581 [2024-05-15 09:08:21.299980] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.581 [2024-05-15 09:08:21.300003] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.581 [2024-05-15 09:08:21.300019] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.581 [2024-05-15 09:08:21.303661] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.581 [2024-05-15 09:08:21.312883] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.581 [2024-05-15 09:08:21.313292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.581 [2024-05-15 09:08:21.313425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.581 [2024-05-15 09:08:21.313455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.581 [2024-05-15 09:08:21.313478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.581 [2024-05-15 09:08:21.313721] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.581 [2024-05-15 09:08:21.313967] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.581 [2024-05-15 09:08:21.313991] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.581 [2024-05-15 09:08:21.314006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.581 [2024-05-15 09:08:21.317651] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.581 [2024-05-15 09:08:21.326874] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.581 [2024-05-15 09:08:21.327293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.581 [2024-05-15 09:08:21.327449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.581 [2024-05-15 09:08:21.327478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.581 [2024-05-15 09:08:21.327495] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.581 [2024-05-15 09:08:21.327738] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.581 [2024-05-15 09:08:21.327983] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.581 [2024-05-15 09:08:21.328006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.581 [2024-05-15 09:08:21.328022] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.581 [2024-05-15 09:08:21.331664] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.581 [2024-05-15 09:08:21.340911] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.581 [2024-05-15 09:08:21.341299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.581 [2024-05-15 09:08:21.341472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.581 [2024-05-15 09:08:21.341501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.581 [2024-05-15 09:08:21.341519] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.581 [2024-05-15 09:08:21.341761] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.581 [2024-05-15 09:08:21.342007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.581 [2024-05-15 09:08:21.342030] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.581 [2024-05-15 09:08:21.342046] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.581 [2024-05-15 09:08:21.345689] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.581 [2024-05-15 09:08:21.354911] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.581 [2024-05-15 09:08:21.355321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.581 [2024-05-15 09:08:21.355491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.581 [2024-05-15 09:08:21.355519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.581 [2024-05-15 09:08:21.355537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.581 [2024-05-15 09:08:21.355784] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.581 [2024-05-15 09:08:21.356030] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.581 [2024-05-15 09:08:21.356054] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.581 [2024-05-15 09:08:21.356069] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.581 [2024-05-15 09:08:21.359710] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.581 [2024-05-15 09:08:21.368966] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.581 [2024-05-15 09:08:21.369401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.581 [2024-05-15 09:08:21.369517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.581 [2024-05-15 09:08:21.369545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.581 [2024-05-15 09:08:21.369562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.581 [2024-05-15 09:08:21.369805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.581 [2024-05-15 09:08:21.370050] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.581 [2024-05-15 09:08:21.370076] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.581 [2024-05-15 09:08:21.370092] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.841 [2024-05-15 09:08:21.373763] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.841 [2024-05-15 09:08:21.383019] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.841 [2024-05-15 09:08:21.383438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.841 [2024-05-15 09:08:21.383583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.841 [2024-05-15 09:08:21.383611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.841 [2024-05-15 09:08:21.383629] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.841 [2024-05-15 09:08:21.383871] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.841 [2024-05-15 09:08:21.384115] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.841 [2024-05-15 09:08:21.384138] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.841 [2024-05-15 09:08:21.384154] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.841 [2024-05-15 09:08:21.387797] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.841 [2024-05-15 09:08:21.397036] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.841 [2024-05-15 09:08:21.397430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.841 [2024-05-15 09:08:21.397581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.841 [2024-05-15 09:08:21.397608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.841 [2024-05-15 09:08:21.397626] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.841 [2024-05-15 09:08:21.397869] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.841 [2024-05-15 09:08:21.398120] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.841 [2024-05-15 09:08:21.398143] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.841 [2024-05-15 09:08:21.398159] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.841 [2024-05-15 09:08:21.401801] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.841 [2024-05-15 09:08:21.411110] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.841 [2024-05-15 09:08:21.411510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.841 [2024-05-15 09:08:21.411657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.841 [2024-05-15 09:08:21.411686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.841 [2024-05-15 09:08:21.411703] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.841 [2024-05-15 09:08:21.411945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.841 [2024-05-15 09:08:21.412193] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.841 [2024-05-15 09:08:21.412229] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.841 [2024-05-15 09:08:21.412249] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.841 [2024-05-15 09:08:21.415880] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.841 [2024-05-15 09:08:21.425114] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.841 [2024-05-15 09:08:21.425530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.841 [2024-05-15 09:08:21.425675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.841 [2024-05-15 09:08:21.425704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.841 [2024-05-15 09:08:21.425722] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.841 [2024-05-15 09:08:21.425964] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.841 [2024-05-15 09:08:21.426209] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.841 [2024-05-15 09:08:21.426246] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.841 [2024-05-15 09:08:21.426264] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.841 [2024-05-15 09:08:21.429897] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.841 [2024-05-15 09:08:21.439117] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.841 [2024-05-15 09:08:21.439547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.841 [2024-05-15 09:08:21.439718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.841 [2024-05-15 09:08:21.439747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.841 [2024-05-15 09:08:21.439766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.841 [2024-05-15 09:08:21.440008] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.841 [2024-05-15 09:08:21.440271] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.841 [2024-05-15 09:08:21.440302] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.841 [2024-05-15 09:08:21.440320] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.841 [2024-05-15 09:08:21.443955] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.841 [2024-05-15 09:08:21.453175] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.841 [2024-05-15 09:08:21.453626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.841 [2024-05-15 09:08:21.453795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.841 [2024-05-15 09:08:21.453825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.841 [2024-05-15 09:08:21.453843] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.841 [2024-05-15 09:08:21.454086] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.841 [2024-05-15 09:08:21.454347] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.841 [2024-05-15 09:08:21.454372] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.841 [2024-05-15 09:08:21.454388] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.841 [2024-05-15 09:08:21.458024] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.841 [2024-05-15 09:08:21.467273] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.841 [2024-05-15 09:08:21.467702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.841 [2024-05-15 09:08:21.467862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.841 [2024-05-15 09:08:21.467891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.841 [2024-05-15 09:08:21.467909] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.841 [2024-05-15 09:08:21.468151] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.841 [2024-05-15 09:08:21.468409] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.841 [2024-05-15 09:08:21.468434] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.841 [2024-05-15 09:08:21.468450] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.842 [2024-05-15 09:08:21.472090] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.842 [2024-05-15 09:08:21.481330] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.842 [2024-05-15 09:08:21.481772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.842 [2024-05-15 09:08:21.481962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.842 [2024-05-15 09:08:21.481992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.842 [2024-05-15 09:08:21.482011] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.842 [2024-05-15 09:08:21.482265] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.842 [2024-05-15 09:08:21.482512] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.842 [2024-05-15 09:08:21.482536] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.842 [2024-05-15 09:08:21.482558] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.842 [2024-05-15 09:08:21.486197] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.842 [2024-05-15 09:08:21.495452] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.842 [2024-05-15 09:08:21.495836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.842 [2024-05-15 09:08:21.495993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.842 [2024-05-15 09:08:21.496023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.842 [2024-05-15 09:08:21.496041] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.842 [2024-05-15 09:08:21.496296] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.842 [2024-05-15 09:08:21.496545] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.842 [2024-05-15 09:08:21.496569] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.842 [2024-05-15 09:08:21.496586] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.842 [2024-05-15 09:08:21.500235] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.842 [2024-05-15 09:08:21.509467] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.842 [2024-05-15 09:08:21.509875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.842 [2024-05-15 09:08:21.510017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.842 [2024-05-15 09:08:21.510047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.842 [2024-05-15 09:08:21.510066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.842 [2024-05-15 09:08:21.510321] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.842 [2024-05-15 09:08:21.510569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.842 [2024-05-15 09:08:21.510594] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.842 [2024-05-15 09:08:21.510610] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.842 [2024-05-15 09:08:21.514261] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.842 [2024-05-15 09:08:21.523499] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.842 [2024-05-15 09:08:21.523892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.842 [2024-05-15 09:08:21.524036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.842 [2024-05-15 09:08:21.524065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.842 [2024-05-15 09:08:21.524084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.842 [2024-05-15 09:08:21.524338] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.842 [2024-05-15 09:08:21.524586] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.842 [2024-05-15 09:08:21.524610] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.842 [2024-05-15 09:08:21.524627] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.842 [2024-05-15 09:08:21.528274] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.842 [2024-05-15 09:08:21.537511] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.842 [2024-05-15 09:08:21.537918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.842 [2024-05-15 09:08:21.538062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.842 [2024-05-15 09:08:21.538092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.842 [2024-05-15 09:08:21.538111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.842 [2024-05-15 09:08:21.538366] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.842 [2024-05-15 09:08:21.538614] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.842 [2024-05-15 09:08:21.538638] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.842 [2024-05-15 09:08:21.538655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.842 [2024-05-15 09:08:21.542308] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.842 [2024-05-15 09:08:21.551538] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.842 [2024-05-15 09:08:21.551946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.842 [2024-05-15 09:08:21.552125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.842 [2024-05-15 09:08:21.552154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.842 [2024-05-15 09:08:21.552173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.842 [2024-05-15 09:08:21.552426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.842 [2024-05-15 09:08:21.552674] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.842 [2024-05-15 09:08:21.552698] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.842 [2024-05-15 09:08:21.552714] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.842 [2024-05-15 09:08:21.556360] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.842 [2024-05-15 09:08:21.565595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.842 [2024-05-15 09:08:21.566000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.842 [2024-05-15 09:08:21.566146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.842 [2024-05-15 09:08:21.566175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.842 [2024-05-15 09:08:21.566194] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.842 [2024-05-15 09:08:21.566444] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.842 [2024-05-15 09:08:21.566692] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.842 [2024-05-15 09:08:21.566717] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.842 [2024-05-15 09:08:21.566733] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.842 [2024-05-15 09:08:21.570380] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.842 [2024-05-15 09:08:21.579619] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.842 [2024-05-15 09:08:21.580030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.842 [2024-05-15 09:08:21.580198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.842 [2024-05-15 09:08:21.580236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.842 [2024-05-15 09:08:21.580256] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.842 [2024-05-15 09:08:21.580499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.842 [2024-05-15 09:08:21.580745] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.842 [2024-05-15 09:08:21.580770] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.842 [2024-05-15 09:08:21.580787] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.842 [2024-05-15 09:08:21.584429] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.842 [2024-05-15 09:08:21.593667] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.842 [2024-05-15 09:08:21.594101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.842 [2024-05-15 09:08:21.594269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.842 [2024-05-15 09:08:21.594300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.842 [2024-05-15 09:08:21.594318] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.842 [2024-05-15 09:08:21.594562] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.842 [2024-05-15 09:08:21.594809] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.842 [2024-05-15 09:08:21.594834] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.842 [2024-05-15 09:08:21.594851] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.842 [2024-05-15 09:08:21.598503] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.842 [2024-05-15 09:08:21.607735] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.842 [2024-05-15 09:08:21.608142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.842 [2024-05-15 09:08:21.608295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.842 [2024-05-15 09:08:21.608327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.843 [2024-05-15 09:08:21.608345] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.843 [2024-05-15 09:08:21.608589] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.843 [2024-05-15 09:08:21.608836] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.843 [2024-05-15 09:08:21.608861] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.843 [2024-05-15 09:08:21.608878] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.843 [2024-05-15 09:08:21.612526] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:26.843 [2024-05-15 09:08:21.621786] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:26.843 [2024-05-15 09:08:21.622190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.843 [2024-05-15 09:08:21.622340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:26.843 [2024-05-15 09:08:21.622372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:26.843 [2024-05-15 09:08:21.622390] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:26.843 [2024-05-15 09:08:21.622633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:26.843 [2024-05-15 09:08:21.622880] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:26.843 [2024-05-15 09:08:21.622906] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:26.843 [2024-05-15 09:08:21.622922] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:26.843 [2024-05-15 09:08:21.626572] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:27.103 [2024-05-15 09:08:21.635858] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:27.103 [2024-05-15 09:08:21.636243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.103 [2024-05-15 09:08:21.636415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.103 [2024-05-15 09:08:21.636444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:27.103 [2024-05-15 09:08:21.636462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:27.103 [2024-05-15 09:08:21.636704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:27.103 [2024-05-15 09:08:21.636953] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:27.103 [2024-05-15 09:08:21.636978] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:27.103 [2024-05-15 09:08:21.636995] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:27.103 [2024-05-15 09:08:21.640661] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:27.103 [2024-05-15 09:08:21.649891] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:27.103 [2024-05-15 09:08:21.650298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.103 [2024-05-15 09:08:21.650446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.103 [2024-05-15 09:08:21.650486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:27.103 [2024-05-15 09:08:21.650505] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:27.103 [2024-05-15 09:08:21.650748] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:27.103 [2024-05-15 09:08:21.650994] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:27.103 [2024-05-15 09:08:21.651017] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:27.103 [2024-05-15 09:08:21.651033] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:27.103 [2024-05-15 09:08:21.654683] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:27.103 [2024-05-15 09:08:21.663906] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:27.103 [2024-05-15 09:08:21.664288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.103 [2024-05-15 09:08:21.664461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.103 [2024-05-15 09:08:21.664494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:27.103 [2024-05-15 09:08:21.664513] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:27.103 [2024-05-15 09:08:21.664756] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:27.103 [2024-05-15 09:08:21.665003] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:27.103 [2024-05-15 09:08:21.665029] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:27.103 [2024-05-15 09:08:21.665045] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:27.103 [2024-05-15 09:08:21.668692] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:27.104 [2024-05-15 09:08:21.677921] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:27.104 [2024-05-15 09:08:21.678339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.104 [2024-05-15 09:08:21.678608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.104 [2024-05-15 09:08:21.678675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:27.104 [2024-05-15 09:08:21.678693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:27.104 [2024-05-15 09:08:21.678938] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:27.104 [2024-05-15 09:08:21.679186] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:27.104 [2024-05-15 09:08:21.679211] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:27.104 [2024-05-15 09:08:21.679241] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:27.104 [2024-05-15 09:08:21.682880] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:27.104 [2024-05-15 09:08:21.691890] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:27.104 [2024-05-15 09:08:21.692300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.104 [2024-05-15 09:08:21.692418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.104 [2024-05-15 09:08:21.692446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:27.104 [2024-05-15 09:08:21.692464] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:27.104 [2024-05-15 09:08:21.692707] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:27.104 [2024-05-15 09:08:21.692952] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:27.104 [2024-05-15 09:08:21.692977] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:27.104 [2024-05-15 09:08:21.692994] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:27.104 [2024-05-15 09:08:21.696643] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:27.104 [2024-05-15 09:08:21.705866] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:27.104 [2024-05-15 09:08:21.706288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.104 [2024-05-15 09:08:21.706434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.104 [2024-05-15 09:08:21.706463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:27.104 [2024-05-15 09:08:21.706486] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:27.104 [2024-05-15 09:08:21.706730] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:27.104 [2024-05-15 09:08:21.706977] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:27.104 [2024-05-15 09:08:21.707002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:27.104 [2024-05-15 09:08:21.707019] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:27.104 [2024-05-15 09:08:21.710667] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:27.104 [2024-05-15 09:08:21.719891] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:27.104 [2024-05-15 09:08:21.720298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.104 [2024-05-15 09:08:21.720469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.104 [2024-05-15 09:08:21.720497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:27.104 [2024-05-15 09:08:21.720514] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:27.104 [2024-05-15 09:08:21.720757] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:27.104 [2024-05-15 09:08:21.721005] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:27.104 [2024-05-15 09:08:21.721030] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:27.104 [2024-05-15 09:08:21.721047] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:27.104 [2024-05-15 09:08:21.724691] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:27.104 [2024-05-15 09:08:21.733929] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:27.104 [2024-05-15 09:08:21.734347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.104 [2024-05-15 09:08:21.734526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.104 [2024-05-15 09:08:21.734554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:27.104 [2024-05-15 09:08:21.734573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:27.104 [2024-05-15 09:08:21.734816] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:27.104 [2024-05-15 09:08:21.735064] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:27.104 [2024-05-15 09:08:21.735090] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:27.104 [2024-05-15 09:08:21.735107] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:27.104 [2024-05-15 09:08:21.738757] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:27.104 [2024-05-15 09:08:21.747978] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:27.104 [2024-05-15 09:08:21.748382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.104 [2024-05-15 09:08:21.748537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.104 [2024-05-15 09:08:21.748567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:27.104 [2024-05-15 09:08:21.748585] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:27.104 [2024-05-15 09:08:21.748833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:27.104 [2024-05-15 09:08:21.749082] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:27.104 [2024-05-15 09:08:21.749107] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:27.104 [2024-05-15 09:08:21.749124] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:27.104 [2024-05-15 09:08:21.752772] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:27.104 [2024-05-15 09:08:21.762005] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:27.104 [2024-05-15 09:08:21.762440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.104 [2024-05-15 09:08:21.762588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.104 [2024-05-15 09:08:21.762617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:27.104 [2024-05-15 09:08:21.762635] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:27.104 [2024-05-15 09:08:21.762878] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:27.104 [2024-05-15 09:08:21.763125] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:27.104 [2024-05-15 09:08:21.763149] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:27.104 [2024-05-15 09:08:21.763166] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:27.104 [2024-05-15 09:08:21.766810] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:27.104 [2024-05-15 09:08:21.776044] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:27.104 [2024-05-15 09:08:21.776448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.104 [2024-05-15 09:08:21.776622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:27.104 [2024-05-15 09:08:21.776651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420
00:42:27.104 [2024-05-15 09:08:21.776669] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set
00:42:27.104 [2024-05-15 09:08:21.776913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor
00:42:27.104 [2024-05-15 09:08:21.777161] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:42:27.104 [2024-05-15 09:08:21.777186] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:42:27.104 [2024-05-15 09:08:21.777202] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:27.104 [2024-05-15 09:08:21.780847] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:42:27.104 [2024-05-15 09:08:21.790077] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.104 [2024-05-15 09:08:21.790511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.104 [2024-05-15 09:08:21.790795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.104 [2024-05-15 09:08:21.790851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.104 [2024-05-15 09:08:21.790870] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.104 [2024-05-15 09:08:21.791114] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.104 [2024-05-15 09:08:21.791381] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.104 [2024-05-15 09:08:21.791407] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.104 [2024-05-15 09:08:21.791423] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.104 [2024-05-15 09:08:21.795056] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.104 [2024-05-15 09:08:21.804067] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.104 [2024-05-15 09:08:21.804514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.105 [2024-05-15 09:08:21.804761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.105 [2024-05-15 09:08:21.804791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.105 [2024-05-15 09:08:21.804810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.105 [2024-05-15 09:08:21.805053] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.105 [2024-05-15 09:08:21.805312] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.105 [2024-05-15 09:08:21.805336] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.105 [2024-05-15 09:08:21.805353] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.105 [2024-05-15 09:08:21.808987] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.105 [2024-05-15 09:08:21.818005] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.105 [2024-05-15 09:08:21.818404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.105 [2024-05-15 09:08:21.818525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.105 [2024-05-15 09:08:21.818554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.105 [2024-05-15 09:08:21.818573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.105 [2024-05-15 09:08:21.818816] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.105 [2024-05-15 09:08:21.819063] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.105 [2024-05-15 09:08:21.819087] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.105 [2024-05-15 09:08:21.819103] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.105 [2024-05-15 09:08:21.822750] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.105 [2024-05-15 09:08:21.831975] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.105 [2024-05-15 09:08:21.832383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.105 [2024-05-15 09:08:21.832532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.105 [2024-05-15 09:08:21.832560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.105 [2024-05-15 09:08:21.832578] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.105 [2024-05-15 09:08:21.832820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.105 [2024-05-15 09:08:21.833066] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.105 [2024-05-15 09:08:21.833091] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.105 [2024-05-15 09:08:21.833115] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.105 [2024-05-15 09:08:21.836771] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.105 [2024-05-15 09:08:21.846003] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.105 [2024-05-15 09:08:21.846451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.105 [2024-05-15 09:08:21.846672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.105 [2024-05-15 09:08:21.846727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.105 [2024-05-15 09:08:21.846745] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.105 [2024-05-15 09:08:21.846988] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.105 [2024-05-15 09:08:21.847250] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.105 [2024-05-15 09:08:21.847280] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.105 [2024-05-15 09:08:21.847296] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.105 [2024-05-15 09:08:21.850930] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.105 [2024-05-15 09:08:21.859947] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.105 [2024-05-15 09:08:21.860366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.105 [2024-05-15 09:08:21.860518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.105 [2024-05-15 09:08:21.860547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.105 [2024-05-15 09:08:21.860565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.105 [2024-05-15 09:08:21.860808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.105 [2024-05-15 09:08:21.861055] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.105 [2024-05-15 09:08:21.861080] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.105 [2024-05-15 09:08:21.861097] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.105 [2024-05-15 09:08:21.864739] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
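Note on the "(9): Bad file descriptor" lines: after the refused connect the qpair's socket has already been torn down, so the subsequent flush in nvme_tcp_qpair_process_completions fails with EBADF (errno 9). A small sketch of the same effect, using a pipe end as a stand-in for the closed qpair socket (purely illustrative, not the SPDK code path):

/* Sketch: once an fd has been closed, any further write on it fails
 * with EBADF (9), matching "Failed to flush tqpair ... (9)" above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }
    close(fds[1]);                      /* fd already torn down */

    if (write(fds[1], "x", 1) < 0) {
        /* Prints errno = 9 (Bad file descriptor) */
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fds[0]);
    return 0;
}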
00:42:27.105 [2024-05-15 09:08:21.873977] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.105 [2024-05-15 09:08:21.874373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.105 [2024-05-15 09:08:21.874523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.105 [2024-05-15 09:08:21.874552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.105 [2024-05-15 09:08:21.874570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.105 [2024-05-15 09:08:21.874813] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.105 [2024-05-15 09:08:21.875060] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.105 [2024-05-15 09:08:21.875084] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.105 [2024-05-15 09:08:21.875106] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.105 [2024-05-15 09:08:21.878751] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.105 [2024-05-15 09:08:21.887977] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.105 [2024-05-15 09:08:21.888392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.105 [2024-05-15 09:08:21.888602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.105 [2024-05-15 09:08:21.888666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.105 [2024-05-15 09:08:21.888685] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.105 [2024-05-15 09:08:21.888928] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.105 [2024-05-15 09:08:21.889175] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.105 [2024-05-15 09:08:21.889199] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.105 [2024-05-15 09:08:21.889227] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.105 [2024-05-15 09:08:21.892891] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.365 [2024-05-15 09:08:21.901965] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.365 [2024-05-15 09:08:21.902406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.365 [2024-05-15 09:08:21.902527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.365 [2024-05-15 09:08:21.902555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.365 [2024-05-15 09:08:21.902573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.365 [2024-05-15 09:08:21.902815] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.365 [2024-05-15 09:08:21.903061] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.365 [2024-05-15 09:08:21.903087] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.365 [2024-05-15 09:08:21.903103] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.365 [2024-05-15 09:08:21.906753] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.366 [2024-05-15 09:08:21.915983] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.366 [2024-05-15 09:08:21.916386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.366 [2024-05-15 09:08:21.916591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.366 [2024-05-15 09:08:21.916648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.366 [2024-05-15 09:08:21.916666] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.366 [2024-05-15 09:08:21.916909] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.366 [2024-05-15 09:08:21.917156] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.366 [2024-05-15 09:08:21.917182] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.366 [2024-05-15 09:08:21.917198] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.366 [2024-05-15 09:08:21.920860] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.366 [2024-05-15 09:08:21.929900] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.366 [2024-05-15 09:08:21.930295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.366 [2024-05-15 09:08:21.930415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.366 [2024-05-15 09:08:21.930444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.366 [2024-05-15 09:08:21.930462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.366 [2024-05-15 09:08:21.930705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.366 [2024-05-15 09:08:21.930953] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.366 [2024-05-15 09:08:21.930979] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.366 [2024-05-15 09:08:21.930995] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.366 [2024-05-15 09:08:21.934636] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.366 [2024-05-15 09:08:21.943862] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.366 [2024-05-15 09:08:21.944277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.366 [2024-05-15 09:08:21.944427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.366 [2024-05-15 09:08:21.944455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.366 [2024-05-15 09:08:21.944473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.366 [2024-05-15 09:08:21.944716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.366 [2024-05-15 09:08:21.944962] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.366 [2024-05-15 09:08:21.944987] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.366 [2024-05-15 09:08:21.945004] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.366 [2024-05-15 09:08:21.948649] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.366 [2024-05-15 09:08:21.957878] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.366 [2024-05-15 09:08:21.958284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.366 [2024-05-15 09:08:21.958460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.366 [2024-05-15 09:08:21.958489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.366 [2024-05-15 09:08:21.958507] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.366 [2024-05-15 09:08:21.958750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.366 [2024-05-15 09:08:21.958998] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.366 [2024-05-15 09:08:21.959022] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.366 [2024-05-15 09:08:21.959038] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.366 [2024-05-15 09:08:21.962683] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.366 [2024-05-15 09:08:21.971908] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.366 [2024-05-15 09:08:21.972313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.366 [2024-05-15 09:08:21.972458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.366 [2024-05-15 09:08:21.972486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.366 [2024-05-15 09:08:21.972504] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.366 [2024-05-15 09:08:21.972746] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.366 [2024-05-15 09:08:21.973003] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.366 [2024-05-15 09:08:21.973028] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.366 [2024-05-15 09:08:21.973044] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.366 [2024-05-15 09:08:21.976691] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.366 [2024-05-15 09:08:21.985913] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.366 [2024-05-15 09:08:21.986319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.366 [2024-05-15 09:08:21.986475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.366 [2024-05-15 09:08:21.986504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.366 [2024-05-15 09:08:21.986523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.366 [2024-05-15 09:08:21.986765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.366 [2024-05-15 09:08:21.987012] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.366 [2024-05-15 09:08:21.987037] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.366 [2024-05-15 09:08:21.987053] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.366 [2024-05-15 09:08:21.990702] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.366 [2024-05-15 09:08:21.999922] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.366 [2024-05-15 09:08:22.000304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.366 [2024-05-15 09:08:22.000456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.366 [2024-05-15 09:08:22.000494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.366 [2024-05-15 09:08:22.000512] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.366 [2024-05-15 09:08:22.000755] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.366 [2024-05-15 09:08:22.001003] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.366 [2024-05-15 09:08:22.001028] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.366 [2024-05-15 09:08:22.001044] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.366 [2024-05-15 09:08:22.004682] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
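Note on the cadence: the timestamps show a new "resetting controller" attempt roughly every 14 ms, each one running the same disconnect / connect / fail sequence. A simplified model of that retry loop is sketched below; try_connect() is a hypothetical stand-in for nvme_tcp_qpair_connect_sock(), the attempt bound is illustrative, and this is not SPDK's actual bdev_nvme reset path:

/* Simplified model of the retry behavior visible in this log:
 * each pass "resets" (reconnects) the controller, a refused
 * connection marks the attempt failed, and the loop retries
 * after a short delay (~14 ms, read off the timestamps above). */
#include <stdio.h>
#include <stdbool.h>
#include <unistd.h>

/* Hypothetical stand-in for nvme_tcp_qpair_connect_sock(); here it
 * always fails, as every attempt in the log does. */
static bool try_connect(void)
{
    return false;
}

int main(void)
{
    const int max_attempts = 10;             /* illustrative bound */
    const useconds_t retry_delay_us = 14000; /* ~14 ms between attempts */

    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        printf("resetting controller (attempt %d)\n", attempt);
        if (try_connect()) {
            printf("controller reconnected\n");
            return 0;
        }
        printf("Resetting controller failed.\n");
        usleep(retry_delay_us);
    }

    printf("giving up after %d attempts\n", max_attempts);
    return 1;
}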
00:42:27.366 [2024-05-15 09:08:22.013893] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.366 [2024-05-15 09:08:22.014310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.366 [2024-05-15 09:08:22.014461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.366 [2024-05-15 09:08:22.014491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.366 [2024-05-15 09:08:22.014509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.366 [2024-05-15 09:08:22.014753] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.366 [2024-05-15 09:08:22.015000] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.366 [2024-05-15 09:08:22.015025] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.366 [2024-05-15 09:08:22.015042] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.366 [2024-05-15 09:08:22.018695] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.366 [2024-05-15 09:08:22.027922] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.366 [2024-05-15 09:08:22.028310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.366 [2024-05-15 09:08:22.028472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.366 [2024-05-15 09:08:22.028502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.366 [2024-05-15 09:08:22.028520] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.366 [2024-05-15 09:08:22.028768] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.366 [2024-05-15 09:08:22.029015] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.366 [2024-05-15 09:08:22.029040] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.366 [2024-05-15 09:08:22.029057] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.366 [2024-05-15 09:08:22.032699] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.366 [2024-05-15 09:08:22.041915] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.366 [2024-05-15 09:08:22.042305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.366 [2024-05-15 09:08:22.042477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.366 [2024-05-15 09:08:22.042514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.367 [2024-05-15 09:08:22.042533] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.367 [2024-05-15 09:08:22.042776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.367 [2024-05-15 09:08:22.043025] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.367 [2024-05-15 09:08:22.043050] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.367 [2024-05-15 09:08:22.043066] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.367 [2024-05-15 09:08:22.046709] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.367 [2024-05-15 09:08:22.055932] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.367 [2024-05-15 09:08:22.056364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.367 [2024-05-15 09:08:22.056506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.367 [2024-05-15 09:08:22.056535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.367 [2024-05-15 09:08:22.056558] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.367 [2024-05-15 09:08:22.056802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.367 [2024-05-15 09:08:22.057047] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.367 [2024-05-15 09:08:22.057072] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.367 [2024-05-15 09:08:22.057089] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.367 [2024-05-15 09:08:22.060732] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.367 [2024-05-15 09:08:22.069945] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.367 [2024-05-15 09:08:22.070377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.367 [2024-05-15 09:08:22.070499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.367 [2024-05-15 09:08:22.070527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.367 [2024-05-15 09:08:22.070546] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.367 [2024-05-15 09:08:22.070789] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.367 [2024-05-15 09:08:22.071035] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.367 [2024-05-15 09:08:22.071060] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.367 [2024-05-15 09:08:22.071076] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.367 [2024-05-15 09:08:22.074715] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.367 [2024-05-15 09:08:22.083927] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.367 [2024-05-15 09:08:22.084339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.367 [2024-05-15 09:08:22.084489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.367 [2024-05-15 09:08:22.084519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.367 [2024-05-15 09:08:22.084537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.367 [2024-05-15 09:08:22.084780] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.367 [2024-05-15 09:08:22.085029] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.367 [2024-05-15 09:08:22.085055] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.367 [2024-05-15 09:08:22.085071] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.367 [2024-05-15 09:08:22.088713] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.367 [2024-05-15 09:08:22.097924] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.367 [2024-05-15 09:08:22.098317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.367 [2024-05-15 09:08:22.098462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.367 [2024-05-15 09:08:22.098491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.367 [2024-05-15 09:08:22.098509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.367 [2024-05-15 09:08:22.098757] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.367 [2024-05-15 09:08:22.099005] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.367 [2024-05-15 09:08:22.099030] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.367 [2024-05-15 09:08:22.099047] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.367 [2024-05-15 09:08:22.102686] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.367 [2024-05-15 09:08:22.111925] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.367 [2024-05-15 09:08:22.112345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.367 [2024-05-15 09:08:22.112469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.367 [2024-05-15 09:08:22.112498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.367 [2024-05-15 09:08:22.112517] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.367 [2024-05-15 09:08:22.112759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.367 [2024-05-15 09:08:22.113006] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.367 [2024-05-15 09:08:22.113031] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.367 [2024-05-15 09:08:22.113048] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.367 [2024-05-15 09:08:22.116693] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.367 [2024-05-15 09:08:22.125907] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.367 [2024-05-15 09:08:22.126301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.367 [2024-05-15 09:08:22.126441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.367 [2024-05-15 09:08:22.126469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.367 [2024-05-15 09:08:22.126488] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.367 [2024-05-15 09:08:22.126730] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.367 [2024-05-15 09:08:22.126976] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.367 [2024-05-15 09:08:22.127001] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.367 [2024-05-15 09:08:22.127018] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.367 [2024-05-15 09:08:22.130656] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.367 [2024-05-15 09:08:22.139870] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.367 [2024-05-15 09:08:22.140262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.367 [2024-05-15 09:08:22.140431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.367 [2024-05-15 09:08:22.140459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.367 [2024-05-15 09:08:22.140477] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.367 [2024-05-15 09:08:22.140720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.367 [2024-05-15 09:08:22.140973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.367 [2024-05-15 09:08:22.140999] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.367 [2024-05-15 09:08:22.141016] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.367 [2024-05-15 09:08:22.144655] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.367 [2024-05-15 09:08:22.153883] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.367 [2024-05-15 09:08:22.154269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.367 [2024-05-15 09:08:22.154399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.367 [2024-05-15 09:08:22.154429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.367 [2024-05-15 09:08:22.154447] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.367 [2024-05-15 09:08:22.154694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.367 [2024-05-15 09:08:22.154949] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.367 [2024-05-15 09:08:22.154973] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.367 [2024-05-15 09:08:22.154990] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.626 [2024-05-15 09:08:22.158659] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.626 [2024-05-15 09:08:22.167821] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.626 [2024-05-15 09:08:22.168221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.626 [2024-05-15 09:08:22.168363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.626 [2024-05-15 09:08:22.168388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.626 [2024-05-15 09:08:22.168404] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.626 [2024-05-15 09:08:22.168649] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.626 [2024-05-15 09:08:22.168871] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.626 [2024-05-15 09:08:22.168891] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.626 [2024-05-15 09:08:22.168905] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.626 [2024-05-15 09:08:22.172010] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.626 [2024-05-15 09:08:22.181228] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.626 [2024-05-15 09:08:22.181677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.626 [2024-05-15 09:08:22.181784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.626 [2024-05-15 09:08:22.181809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.626 [2024-05-15 09:08:22.181825] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.626 [2024-05-15 09:08:22.182083] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.626 [2024-05-15 09:08:22.182324] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.626 [2024-05-15 09:08:22.182350] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.626 [2024-05-15 09:08:22.182365] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.626 [2024-05-15 09:08:22.185428] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.626 [2024-05-15 09:08:22.194478] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.626 [2024-05-15 09:08:22.194927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.626 [2024-05-15 09:08:22.195086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.626 [2024-05-15 09:08:22.195113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.626 [2024-05-15 09:08:22.195130] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.627 [2024-05-15 09:08:22.195373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.627 [2024-05-15 09:08:22.195626] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.627 [2024-05-15 09:08:22.195647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.627 [2024-05-15 09:08:22.195661] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.627 [2024-05-15 09:08:22.198659] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.627 [2024-05-15 09:08:22.207719] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.627 [2024-05-15 09:08:22.208124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.627 [2024-05-15 09:08:22.208272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.627 [2024-05-15 09:08:22.208299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.627 [2024-05-15 09:08:22.208316] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.627 [2024-05-15 09:08:22.208572] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.627 [2024-05-15 09:08:22.208790] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.627 [2024-05-15 09:08:22.208811] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.627 [2024-05-15 09:08:22.208825] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.627 [2024-05-15 09:08:22.211849] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.627 [2024-05-15 09:08:22.220950] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.627 [2024-05-15 09:08:22.221320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.627 [2024-05-15 09:08:22.221454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.627 [2024-05-15 09:08:22.221479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.627 [2024-05-15 09:08:22.221496] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.627 [2024-05-15 09:08:22.221764] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.627 [2024-05-15 09:08:22.221961] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.627 [2024-05-15 09:08:22.221981] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.627 [2024-05-15 09:08:22.221999] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.627 [2024-05-15 09:08:22.225054] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.627 [2024-05-15 09:08:22.234316] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.627 [2024-05-15 09:08:22.234673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.627 [2024-05-15 09:08:22.234815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.627 [2024-05-15 09:08:22.234840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.627 [2024-05-15 09:08:22.234856] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.627 [2024-05-15 09:08:22.235111] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.627 [2024-05-15 09:08:22.235360] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.627 [2024-05-15 09:08:22.235383] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.627 [2024-05-15 09:08:22.235398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.627 [2024-05-15 09:08:22.238422] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.627 [2024-05-15 09:08:22.247582] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.627 [2024-05-15 09:08:22.247988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.627 [2024-05-15 09:08:22.248121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.627 [2024-05-15 09:08:22.248146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.627 [2024-05-15 09:08:22.248163] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.627 [2024-05-15 09:08:22.248390] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.627 [2024-05-15 09:08:22.248636] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.627 [2024-05-15 09:08:22.248656] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.627 [2024-05-15 09:08:22.248669] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.627 [2024-05-15 09:08:22.251668] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.627 [2024-05-15 09:08:22.260958] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.627 [2024-05-15 09:08:22.261283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.627 [2024-05-15 09:08:22.261424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.627 [2024-05-15 09:08:22.261450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.627 [2024-05-15 09:08:22.261466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.627 [2024-05-15 09:08:22.261723] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.627 [2024-05-15 09:08:22.261920] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.627 [2024-05-15 09:08:22.261940] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.627 [2024-05-15 09:08:22.261953] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.627 [2024-05-15 09:08:22.265003] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.627 [2024-05-15 09:08:22.274264] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.627 [2024-05-15 09:08:22.274688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.627 [2024-05-15 09:08:22.274849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.627 [2024-05-15 09:08:22.274874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.627 [2024-05-15 09:08:22.274890] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.627 [2024-05-15 09:08:22.275134] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.627 [2024-05-15 09:08:22.275394] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.627 [2024-05-15 09:08:22.275417] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.627 [2024-05-15 09:08:22.275431] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.627 [2024-05-15 09:08:22.278439] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.627 [2024-05-15 09:08:22.287459] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.627 [2024-05-15 09:08:22.287911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.627 [2024-05-15 09:08:22.288019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.627 [2024-05-15 09:08:22.288044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.627 [2024-05-15 09:08:22.288060] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.627 [2024-05-15 09:08:22.288333] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.627 [2024-05-15 09:08:22.288556] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.627 [2024-05-15 09:08:22.288577] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.627 [2024-05-15 09:08:22.288605] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.627 [2024-05-15 09:08:22.291606] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.627 [2024-05-15 09:08:22.300830] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.627 [2024-05-15 09:08:22.301262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.627 [2024-05-15 09:08:22.301393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.627 [2024-05-15 09:08:22.301420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.627 [2024-05-15 09:08:22.301437] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.627 [2024-05-15 09:08:22.301692] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.627 [2024-05-15 09:08:22.301889] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.627 [2024-05-15 09:08:22.301910] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.627 [2024-05-15 09:08:22.301923] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.627 [2024-05-15 09:08:22.304936] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.627 [2024-05-15 09:08:22.314056] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.627 [2024-05-15 09:08:22.314455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.627 [2024-05-15 09:08:22.314636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.627 [2024-05-15 09:08:22.314663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.627 [2024-05-15 09:08:22.314679] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.627 [2024-05-15 09:08:22.314923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.627 [2024-05-15 09:08:22.315156] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.627 [2024-05-15 09:08:22.315177] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.628 [2024-05-15 09:08:22.315190] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.628 [2024-05-15 09:08:22.318258] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.628 [2024-05-15 09:08:22.327354] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.628 [2024-05-15 09:08:22.327714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.628 [2024-05-15 09:08:22.327880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.628 [2024-05-15 09:08:22.327906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.628 [2024-05-15 09:08:22.327923] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.628 [2024-05-15 09:08:22.328181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.628 [2024-05-15 09:08:22.328415] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.628 [2024-05-15 09:08:22.328437] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.628 [2024-05-15 09:08:22.328451] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.628 [2024-05-15 09:08:22.331458] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.628 [2024-05-15 09:08:22.340745] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.628 [2024-05-15 09:08:22.341114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.628 [2024-05-15 09:08:22.341263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.628 [2024-05-15 09:08:22.341289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.628 [2024-05-15 09:08:22.341306] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.628 [2024-05-15 09:08:22.341553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.628 [2024-05-15 09:08:22.341765] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.628 [2024-05-15 09:08:22.341786] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.628 [2024-05-15 09:08:22.341799] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.628 [2024-05-15 09:08:22.344877] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.628 [2024-05-15 09:08:22.354116] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.628 [2024-05-15 09:08:22.354547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.628 [2024-05-15 09:08:22.354750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.628 [2024-05-15 09:08:22.354776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.628 [2024-05-15 09:08:22.354792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.628 [2024-05-15 09:08:22.355048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.628 [2024-05-15 09:08:22.355302] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.628 [2024-05-15 09:08:22.355325] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.628 [2024-05-15 09:08:22.355340] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.628 [2024-05-15 09:08:22.358408] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.628 [2024-05-15 09:08:22.367525] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.628 [2024-05-15 09:08:22.367896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.628 [2024-05-15 09:08:22.368035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.628 [2024-05-15 09:08:22.368060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.628 [2024-05-15 09:08:22.368076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.628 [2024-05-15 09:08:22.368320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.628 [2024-05-15 09:08:22.368560] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.628 [2024-05-15 09:08:22.368596] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.628 [2024-05-15 09:08:22.368610] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.628 [2024-05-15 09:08:22.371614] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.628 [2024-05-15 09:08:22.380868] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.628 [2024-05-15 09:08:22.381315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.628 [2024-05-15 09:08:22.381450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.628 [2024-05-15 09:08:22.381476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.628 [2024-05-15 09:08:22.381492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.628 [2024-05-15 09:08:22.381762] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.628 [2024-05-15 09:08:22.381959] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.628 [2024-05-15 09:08:22.381979] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.628 [2024-05-15 09:08:22.381992] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.628 [2024-05-15 09:08:22.385038] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.628 [2024-05-15 09:08:22.394105] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.628 [2024-05-15 09:08:22.394480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.628 [2024-05-15 09:08:22.394617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.628 [2024-05-15 09:08:22.394649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.628 [2024-05-15 09:08:22.394666] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.628 [2024-05-15 09:08:22.394922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.628 [2024-05-15 09:08:22.395124] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.628 [2024-05-15 09:08:22.395145] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.628 [2024-05-15 09:08:22.395159] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.628 [2024-05-15 09:08:22.398485] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.628 [2024-05-15 09:08:22.407798] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.628 [2024-05-15 09:08:22.408157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.628 [2024-05-15 09:08:22.408278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.628 [2024-05-15 09:08:22.408305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.628 [2024-05-15 09:08:22.408322] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.628 [2024-05-15 09:08:22.408568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.628 [2024-05-15 09:08:22.408786] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.628 [2024-05-15 09:08:22.408808] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.628 [2024-05-15 09:08:22.408821] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.628 [2024-05-15 09:08:22.412126] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.887 [2024-05-15 09:08:22.421388] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.887 [2024-05-15 09:08:22.421799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.887 [2024-05-15 09:08:22.421993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.887 [2024-05-15 09:08:22.422021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.887 [2024-05-15 09:08:22.422040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.887 [2024-05-15 09:08:22.422270] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.887 [2024-05-15 09:08:22.422516] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.887 [2024-05-15 09:08:22.422538] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.887 [2024-05-15 09:08:22.422552] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.887 [2024-05-15 09:08:22.425871] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.887 [2024-05-15 09:08:22.434750] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.887 [2024-05-15 09:08:22.435091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.887 [2024-05-15 09:08:22.435229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.887 [2024-05-15 09:08:22.435256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.887 [2024-05-15 09:08:22.435280] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.887 [2024-05-15 09:08:22.435527] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.887 [2024-05-15 09:08:22.435739] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.887 [2024-05-15 09:08:22.435760] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.887 [2024-05-15 09:08:22.435774] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.887 [2024-05-15 09:08:22.438777] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.887 [2024-05-15 09:08:22.448118] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.887 [2024-05-15 09:08:22.448595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.887 [2024-05-15 09:08:22.448728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.887 [2024-05-15 09:08:22.448754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.887 [2024-05-15 09:08:22.448770] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.887 [2024-05-15 09:08:22.449024] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.887 [2024-05-15 09:08:22.449247] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.887 [2024-05-15 09:08:22.449284] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.887 [2024-05-15 09:08:22.449300] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.887 [2024-05-15 09:08:22.452338] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.887 [2024-05-15 09:08:22.461448] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.887 [2024-05-15 09:08:22.461832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.887 [2024-05-15 09:08:22.461975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.887 [2024-05-15 09:08:22.462000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.887 [2024-05-15 09:08:22.462031] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.887 [2024-05-15 09:08:22.462292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.887 [2024-05-15 09:08:22.462501] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.887 [2024-05-15 09:08:22.462536] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.887 [2024-05-15 09:08:22.462549] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.887 [2024-05-15 09:08:22.465569] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.887 [2024-05-15 09:08:22.474793] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.888 [2024-05-15 09:08:22.475241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.888 [2024-05-15 09:08:22.475376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.888 [2024-05-15 09:08:22.475402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.888 [2024-05-15 09:08:22.475419] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.888 [2024-05-15 09:08:22.475668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.888 [2024-05-15 09:08:22.475879] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.888 [2024-05-15 09:08:22.475900] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.888 [2024-05-15 09:08:22.475913] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.888 [2024-05-15 09:08:22.478954] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.888 [2024-05-15 09:08:22.488098] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.888 [2024-05-15 09:08:22.488432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.888 [2024-05-15 09:08:22.488585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.888 [2024-05-15 09:08:22.488610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.888 [2024-05-15 09:08:22.488626] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.888 [2024-05-15 09:08:22.488864] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.888 [2024-05-15 09:08:22.489077] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.888 [2024-05-15 09:08:22.489098] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.888 [2024-05-15 09:08:22.489110] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.888 [2024-05-15 09:08:22.492164] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.888 [2024-05-15 09:08:22.501413] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.888 [2024-05-15 09:08:22.501815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.888 [2024-05-15 09:08:22.501958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.888 [2024-05-15 09:08:22.501985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.888 [2024-05-15 09:08:22.502003] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.888 [2024-05-15 09:08:22.502269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.888 [2024-05-15 09:08:22.502493] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.888 [2024-05-15 09:08:22.502515] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.888 [2024-05-15 09:08:22.502529] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.888 [2024-05-15 09:08:22.505564] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.888 [2024-05-15 09:08:22.514798] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.888 [2024-05-15 09:08:22.515186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.888 [2024-05-15 09:08:22.515318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.888 [2024-05-15 09:08:22.515344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.888 [2024-05-15 09:08:22.515361] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.888 [2024-05-15 09:08:22.515606] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.888 [2024-05-15 09:08:22.515806] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.888 [2024-05-15 09:08:22.515827] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.888 [2024-05-15 09:08:22.515840] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.888 [2024-05-15 09:08:22.518803] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.888 [2024-05-15 09:08:22.528065] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.888 [2024-05-15 09:08:22.528462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.888 [2024-05-15 09:08:22.528621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.888 [2024-05-15 09:08:22.528647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.888 [2024-05-15 09:08:22.528663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.888 [2024-05-15 09:08:22.528906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.888 [2024-05-15 09:08:22.529119] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.888 [2024-05-15 09:08:22.529139] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.888 [2024-05-15 09:08:22.529153] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.888 [2024-05-15 09:08:22.532214] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.888 [2024-05-15 09:08:22.541505] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.888 [2024-05-15 09:08:22.541957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.888 [2024-05-15 09:08:22.542074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.888 [2024-05-15 09:08:22.542100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.888 [2024-05-15 09:08:22.542117] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.888 [2024-05-15 09:08:22.542346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.888 [2024-05-15 09:08:22.542597] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.888 [2024-05-15 09:08:22.542618] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.888 [2024-05-15 09:08:22.542631] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.888 [2024-05-15 09:08:22.545755] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.888 [2024-05-15 09:08:22.554830] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.888 [2024-05-15 09:08:22.555202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.888 [2024-05-15 09:08:22.555346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.888 [2024-05-15 09:08:22.555371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.888 [2024-05-15 09:08:22.555388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.888 [2024-05-15 09:08:22.555632] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.888 [2024-05-15 09:08:22.555845] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.888 [2024-05-15 09:08:22.555870] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.888 [2024-05-15 09:08:22.555884] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.888 [2024-05-15 09:08:22.558913] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.888 [2024-05-15 09:08:22.568219] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.888 [2024-05-15 09:08:22.568550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.888 [2024-05-15 09:08:22.568693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.888 [2024-05-15 09:08:22.568719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.888 [2024-05-15 09:08:22.568735] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.888 [2024-05-15 09:08:22.568978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.888 [2024-05-15 09:08:22.569192] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.888 [2024-05-15 09:08:22.569235] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.888 [2024-05-15 09:08:22.569250] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.888 [2024-05-15 09:08:22.572292] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.888 [2024-05-15 09:08:22.581496] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.888 [2024-05-15 09:08:22.581945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.888 [2024-05-15 09:08:22.582084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.888 [2024-05-15 09:08:22.582111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.888 [2024-05-15 09:08:22.582128] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.888 [2024-05-15 09:08:22.582397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.888 [2024-05-15 09:08:22.582636] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.888 [2024-05-15 09:08:22.582657] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.888 [2024-05-15 09:08:22.582670] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.888 [2024-05-15 09:08:22.585711] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.888 [2024-05-15 09:08:22.594831] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.888 [2024-05-15 09:08:22.595174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.888 [2024-05-15 09:08:22.595339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.889 [2024-05-15 09:08:22.595365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.889 [2024-05-15 09:08:22.595381] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.889 [2024-05-15 09:08:22.595627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.889 [2024-05-15 09:08:22.595825] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.889 [2024-05-15 09:08:22.595845] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.889 [2024-05-15 09:08:22.595863] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.889 [2024-05-15 09:08:22.598981] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.889 [2024-05-15 09:08:22.608239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.889 [2024-05-15 09:08:22.608618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.889 [2024-05-15 09:08:22.608777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.889 [2024-05-15 09:08:22.608802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.889 [2024-05-15 09:08:22.608817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.889 [2024-05-15 09:08:22.609072] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.889 [2024-05-15 09:08:22.609346] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.889 [2024-05-15 09:08:22.609368] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.889 [2024-05-15 09:08:22.609383] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.889 [2024-05-15 09:08:22.612439] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.889 [2024-05-15 09:08:22.621646] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.889 [2024-05-15 09:08:22.622018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.889 [2024-05-15 09:08:22.622160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.889 [2024-05-15 09:08:22.622185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.889 [2024-05-15 09:08:22.622202] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.889 [2024-05-15 09:08:22.622467] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.889 [2024-05-15 09:08:22.622693] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.889 [2024-05-15 09:08:22.622713] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.889 [2024-05-15 09:08:22.622726] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.889 [2024-05-15 09:08:22.625796] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.889 [2024-05-15 09:08:22.635089] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.889 [2024-05-15 09:08:22.635566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.889 [2024-05-15 09:08:22.635718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.889 [2024-05-15 09:08:22.635744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.889 [2024-05-15 09:08:22.635761] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.889 [2024-05-15 09:08:22.636015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.889 [2024-05-15 09:08:22.636237] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.889 [2024-05-15 09:08:22.636259] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.889 [2024-05-15 09:08:22.636272] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.889 [2024-05-15 09:08:22.639309] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.889 [2024-05-15 09:08:22.648657] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.889 [2024-05-15 09:08:22.649063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.889 [2024-05-15 09:08:22.649226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.889 [2024-05-15 09:08:22.649252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.889 [2024-05-15 09:08:22.649269] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.889 [2024-05-15 09:08:22.649488] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.889 [2024-05-15 09:08:22.649742] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.889 [2024-05-15 09:08:22.649762] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.889 [2024-05-15 09:08:22.649776] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.889 [2024-05-15 09:08:22.653248] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:27.889 [2024-05-15 09:08:22.662090] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.889 [2024-05-15 09:08:22.662491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.889 [2024-05-15 09:08:22.662646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.889 [2024-05-15 09:08:22.662672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.889 [2024-05-15 09:08:22.662689] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.889 [2024-05-15 09:08:22.662941] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.889 [2024-05-15 09:08:22.663153] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.889 [2024-05-15 09:08:22.663172] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.889 [2024-05-15 09:08:22.663185] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:27.889 [2024-05-15 09:08:22.666376] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:27.889 [2024-05-15 09:08:22.675711] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:27.889 [2024-05-15 09:08:22.676159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.889 [2024-05-15 09:08:22.676320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:27.889 [2024-05-15 09:08:22.676349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:27.889 [2024-05-15 09:08:22.676368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:27.889 [2024-05-15 09:08:22.676606] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:27.889 [2024-05-15 09:08:22.676857] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:27.889 [2024-05-15 09:08:22.676881] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:27.889 [2024-05-15 09:08:22.676913] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.148 [2024-05-15 09:08:22.680224] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.148 [2024-05-15 09:08:22.689173] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.148 [2024-05-15 09:08:22.689590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.148 [2024-05-15 09:08:22.689729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.148 [2024-05-15 09:08:22.689757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.148 [2024-05-15 09:08:22.689774] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.148 [2024-05-15 09:08:22.690029] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.148 [2024-05-15 09:08:22.690254] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.148 [2024-05-15 09:08:22.690276] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.148 [2024-05-15 09:08:22.690289] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.148 [2024-05-15 09:08:22.693304] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.148 [2024-05-15 09:08:22.702539] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.148 [2024-05-15 09:08:22.702924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.148 [2024-05-15 09:08:22.703071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.148 [2024-05-15 09:08:22.703098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.148 [2024-05-15 09:08:22.703115] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.148 [2024-05-15 09:08:22.703399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.148 [2024-05-15 09:08:22.703603] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.148 [2024-05-15 09:08:22.703623] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.148 [2024-05-15 09:08:22.703636] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.148 [2024-05-15 09:08:22.706699] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.148 [2024-05-15 09:08:22.715887] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.148 [2024-05-15 09:08:22.716256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.148 [2024-05-15 09:08:22.716421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.148 [2024-05-15 09:08:22.716448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.148 [2024-05-15 09:08:22.716465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.148 [2024-05-15 09:08:22.716733] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.148 [2024-05-15 09:08:22.716930] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.148 [2024-05-15 09:08:22.716950] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.148 [2024-05-15 09:08:22.716963] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.148 [2024-05-15 09:08:22.720010] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.148 [2024-05-15 09:08:22.729269] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.148 [2024-05-15 09:08:22.729711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.148 [2024-05-15 09:08:22.729855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.148 [2024-05-15 09:08:22.729881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.148 [2024-05-15 09:08:22.729898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.148 [2024-05-15 09:08:22.730145] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.148 [2024-05-15 09:08:22.730383] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.148 [2024-05-15 09:08:22.730404] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.149 [2024-05-15 09:08:22.730417] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.149 [2024-05-15 09:08:22.733464] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.149 [2024-05-15 09:08:22.742582] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.149 [2024-05-15 09:08:22.743015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.743147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.743174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.149 [2024-05-15 09:08:22.743191] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.149 [2024-05-15 09:08:22.743431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.149 [2024-05-15 09:08:22.743664] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.149 [2024-05-15 09:08:22.743684] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.149 [2024-05-15 09:08:22.743696] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.149 [2024-05-15 09:08:22.746735] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.149 [2024-05-15 09:08:22.755900] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.149 [2024-05-15 09:08:22.756273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.756399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.756424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.149 [2024-05-15 09:08:22.756441] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.149 [2024-05-15 09:08:22.756705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.149 [2024-05-15 09:08:22.756902] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.149 [2024-05-15 09:08:22.756921] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.149 [2024-05-15 09:08:22.756934] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.149 [2024-05-15 09:08:22.760022] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.149 [2024-05-15 09:08:22.769608] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.149 [2024-05-15 09:08:22.770040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.770208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.770242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.149 [2024-05-15 09:08:22.770276] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.149 [2024-05-15 09:08:22.770510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.149 [2024-05-15 09:08:22.770708] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.149 [2024-05-15 09:08:22.770728] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.149 [2024-05-15 09:08:22.770742] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.149 [2024-05-15 09:08:22.773901] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.149 [2024-05-15 09:08:22.783002] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.149 [2024-05-15 09:08:22.783371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.783506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.783533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.149 [2024-05-15 09:08:22.783549] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.149 [2024-05-15 09:08:22.783781] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.149 [2024-05-15 09:08:22.784012] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.149 [2024-05-15 09:08:22.784033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.149 [2024-05-15 09:08:22.784046] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.149 [2024-05-15 09:08:22.787099] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.149 [2024-05-15 09:08:22.796580] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.149 [2024-05-15 09:08:22.797029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.797166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.797191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.149 [2024-05-15 09:08:22.797207] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.149 [2024-05-15 09:08:22.797446] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.149 [2024-05-15 09:08:22.797674] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.149 [2024-05-15 09:08:22.797695] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.149 [2024-05-15 09:08:22.797708] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.149 [2024-05-15 09:08:22.800840] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.149 [2024-05-15 09:08:22.810031] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.149 [2024-05-15 09:08:22.810422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.810584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.810611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.149 [2024-05-15 09:08:22.810628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.149 [2024-05-15 09:08:22.810875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.149 [2024-05-15 09:08:22.811088] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.149 [2024-05-15 09:08:22.811108] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.149 [2024-05-15 09:08:22.811122] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.149 [2024-05-15 09:08:22.814193] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.149 [2024-05-15 09:08:22.823433] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.149 [2024-05-15 09:08:22.823793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.823952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.823978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.149 [2024-05-15 09:08:22.823994] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.149 [2024-05-15 09:08:22.824273] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.149 [2024-05-15 09:08:22.824489] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.149 [2024-05-15 09:08:22.824511] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.149 [2024-05-15 09:08:22.824526] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.149 [2024-05-15 09:08:22.827613] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.149 [2024-05-15 09:08:22.836846] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.149 [2024-05-15 09:08:22.837185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.837365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.837392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.149 [2024-05-15 09:08:22.837408] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.149 [2024-05-15 09:08:22.837652] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.149 [2024-05-15 09:08:22.837864] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.149 [2024-05-15 09:08:22.837885] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.149 [2024-05-15 09:08:22.837899] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.149 [2024-05-15 09:08:22.840986] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.149 [2024-05-15 09:08:22.850101] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.149 [2024-05-15 09:08:22.850499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.850636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.850664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.149 [2024-05-15 09:08:22.850680] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.149 [2024-05-15 09:08:22.850934] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.149 [2024-05-15 09:08:22.851136] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.149 [2024-05-15 09:08:22.851157] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.149 [2024-05-15 09:08:22.851171] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.149 [2024-05-15 09:08:22.854233] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.149 [2024-05-15 09:08:22.863399] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.149 [2024-05-15 09:08:22.863788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.863954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.863979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.149 [2024-05-15 09:08:22.863995] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.149 [2024-05-15 09:08:22.864260] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.149 [2024-05-15 09:08:22.864471] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.149 [2024-05-15 09:08:22.864492] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.149 [2024-05-15 09:08:22.864507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.149 [2024-05-15 09:08:22.867543] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.149 [2024-05-15 09:08:22.876726] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.149 [2024-05-15 09:08:22.877159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.877292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.877318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.149 [2024-05-15 09:08:22.877334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.149 [2024-05-15 09:08:22.877566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.149 [2024-05-15 09:08:22.877777] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.149 [2024-05-15 09:08:22.877798] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.149 [2024-05-15 09:08:22.877812] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.149 [2024-05-15 09:08:22.880864] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.149 [2024-05-15 09:08:22.890182] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.149 [2024-05-15 09:08:22.890570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.890711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.149 [2024-05-15 09:08:22.890738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.149 [2024-05-15 09:08:22.890754] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.149 [2024-05-15 09:08:22.891014] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.149 [2024-05-15 09:08:22.891253] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.150 [2024-05-15 09:08:22.891280] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.150 [2024-05-15 09:08:22.891295] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.150 [2024-05-15 09:08:22.894383] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.150 [2024-05-15 09:08:22.903566] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.150 [2024-05-15 09:08:22.903947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.150 [2024-05-15 09:08:22.904109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.150 [2024-05-15 09:08:22.904134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.150 [2024-05-15 09:08:22.904150] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.150 [2024-05-15 09:08:22.904391] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.150 [2024-05-15 09:08:22.904630] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.150 [2024-05-15 09:08:22.904651] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.150 [2024-05-15 09:08:22.904664] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.150 [2024-05-15 09:08:22.907955] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.150 [2024-05-15 09:08:22.917137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.150 [2024-05-15 09:08:22.917679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.150 [2024-05-15 09:08:22.917816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.150 [2024-05-15 09:08:22.917841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.150 [2024-05-15 09:08:22.917857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.150 [2024-05-15 09:08:22.918090] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.150 [2024-05-15 09:08:22.918361] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.150 [2024-05-15 09:08:22.918383] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.150 [2024-05-15 09:08:22.918398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.150 [2024-05-15 09:08:22.921647] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.150 [2024-05-15 09:08:22.930709] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.150 [2024-05-15 09:08:22.931035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.150 [2024-05-15 09:08:22.931189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.150 [2024-05-15 09:08:22.931223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.150 [2024-05-15 09:08:22.931242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.150 [2024-05-15 09:08:22.931461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.150 [2024-05-15 09:08:22.931676] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.150 [2024-05-15 09:08:22.931697] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.150 [2024-05-15 09:08:22.931715] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.150 [2024-05-15 09:08:22.934876] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.409 [2024-05-15 09:08:22.944085] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.409 [2024-05-15 09:08:22.944509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.409 [2024-05-15 09:08:22.944629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.409 [2024-05-15 09:08:22.944656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.409 [2024-05-15 09:08:22.944672] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.409 [2024-05-15 09:08:22.944924] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.409 [2024-05-15 09:08:22.945144] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.409 [2024-05-15 09:08:22.945165] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.409 [2024-05-15 09:08:22.945178] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.409 [2024-05-15 09:08:22.948309] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.409 [2024-05-15 09:08:22.957381] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.409 [2024-05-15 09:08:22.957832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.409 [2024-05-15 09:08:22.957978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.409 [2024-05-15 09:08:22.958005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.409 [2024-05-15 09:08:22.958022] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.409 [2024-05-15 09:08:22.958288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.409 [2024-05-15 09:08:22.958491] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.409 [2024-05-15 09:08:22.958512] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.409 [2024-05-15 09:08:22.958525] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.410 [2024-05-15 09:08:22.961615] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.410 [2024-05-15 09:08:22.970658] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.410 [2024-05-15 09:08:22.971089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.410 [2024-05-15 09:08:22.971260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.410 [2024-05-15 09:08:22.971287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.410 [2024-05-15 09:08:22.971304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.410 [2024-05-15 09:08:22.971549] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.410 [2024-05-15 09:08:22.971745] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.410 [2024-05-15 09:08:22.971765] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.410 [2024-05-15 09:08:22.971778] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.410 [2024-05-15 09:08:22.974783] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.410 [2024-05-15 09:08:22.983892] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.410 [2024-05-15 09:08:22.984241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.410 [2024-05-15 09:08:22.984376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.410 [2024-05-15 09:08:22.984401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.410 [2024-05-15 09:08:22.984417] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.410 [2024-05-15 09:08:22.984663] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.410 [2024-05-15 09:08:22.984859] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.410 [2024-05-15 09:08:22.984879] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.410 [2024-05-15 09:08:22.984892] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.410 [2024-05-15 09:08:22.987940] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.410 [2024-05-15 09:08:22.997120] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.410 [2024-05-15 09:08:22.997534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.410 [2024-05-15 09:08:22.997656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.410 [2024-05-15 09:08:22.997682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.410 [2024-05-15 09:08:22.997698] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.410 [2024-05-15 09:08:22.997955] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.410 [2024-05-15 09:08:22.998151] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.410 [2024-05-15 09:08:22.998172] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.410 [2024-05-15 09:08:22.998186] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.410 [2024-05-15 09:08:23.001278] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
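The posix.c:1037 failures above are plain TCP connection refusals: with no listener behind 10.0.0.2:4420, the kernel rejects each connection attempt outright. A self-contained POSIX sketch that reproduces the same errno against a local port with no listener (127.0.0.1 is a placeholder here, not the test's address; the sketch assumes nothing is listening on the chosen port):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);             /* NVMe/TCP well-known port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        /* With nothing listening on the port, connect() fails and errno is
         * ECONNREFUSED (111 on Linux) -- the same failure posix.c reports. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

If some local service does happen to listen on 4420 the connect() succeeds instead, so the closed port is an assumption of the sketch.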
00:42:28.410 [2024-05-15 09:08:23.010549] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.410 [2024-05-15 09:08:23.011000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.410 [2024-05-15 09:08:23.011169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.410 [2024-05-15 09:08:23.011196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.410 [2024-05-15 09:08:23.011212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.410 [2024-05-15 09:08:23.011468] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.410 [2024-05-15 09:08:23.011701] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.410 [2024-05-15 09:08:23.011721] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.410 [2024-05-15 09:08:23.011734] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.410 [2024-05-15 09:08:23.014791] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.410 [2024-05-15 09:08:23.023986] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.410 [2024-05-15 09:08:23.024373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.410 [2024-05-15 09:08:23.024508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.410 [2024-05-15 09:08:23.024535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.410 [2024-05-15 09:08:23.024552] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.410 [2024-05-15 09:08:23.024816] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.410 [2024-05-15 09:08:23.025012] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.410 [2024-05-15 09:08:23.025033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.410 [2024-05-15 09:08:23.025046] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.410 [2024-05-15 09:08:23.028010] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.410 [2024-05-15 09:08:23.037311] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.410 [2024-05-15 09:08:23.037698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.410 [2024-05-15 09:08:23.037822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.410 [2024-05-15 09:08:23.037849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.410 [2024-05-15 09:08:23.037866] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.410 [2024-05-15 09:08:23.038122] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.410 [2024-05-15 09:08:23.038346] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.410 [2024-05-15 09:08:23.038367] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.410 [2024-05-15 09:08:23.038381] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.410 [2024-05-15 09:08:23.041420] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.410 [2024-05-15 09:08:23.050727] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.410 [2024-05-15 09:08:23.051050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.410 [2024-05-15 09:08:23.051188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.410 [2024-05-15 09:08:23.051221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.410 [2024-05-15 09:08:23.051239] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.410 [2024-05-15 09:08:23.051488] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.410 [2024-05-15 09:08:23.051701] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.410 [2024-05-15 09:08:23.051721] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.410 [2024-05-15 09:08:23.051735] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.410 [2024-05-15 09:08:23.054775] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.410 [2024-05-15 09:08:23.064069] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.410 [2024-05-15 09:08:23.064479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.410 [2024-05-15 09:08:23.064634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.410 [2024-05-15 09:08:23.064661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.410 [2024-05-15 09:08:23.064677] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.410 [2024-05-15 09:08:23.064922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.410 [2024-05-15 09:08:23.065135] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.410 [2024-05-15 09:08:23.065155] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.410 [2024-05-15 09:08:23.065169] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.410 [2024-05-15 09:08:23.068210] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.410 [2024-05-15 09:08:23.077372] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.410 [2024-05-15 09:08:23.077763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.410 [2024-05-15 09:08:23.077901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.410 [2024-05-15 09:08:23.077927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.410 [2024-05-15 09:08:23.077943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.410 [2024-05-15 09:08:23.078198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.410 [2024-05-15 09:08:23.078428] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.410 [2024-05-15 09:08:23.078451] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.410 [2024-05-15 09:08:23.078465] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.410 [2024-05-15 09:08:23.081535] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.410 [2024-05-15 09:08:23.090663] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.410 [2024-05-15 09:08:23.091002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.410 [2024-05-15 09:08:23.091139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.410 [2024-05-15 09:08:23.091165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.410 [2024-05-15 09:08:23.091181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.410 [2024-05-15 09:08:23.091437] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.410 [2024-05-15 09:08:23.091668] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.410 [2024-05-15 09:08:23.091690] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.411 [2024-05-15 09:08:23.091704] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.411 [2024-05-15 09:08:23.094702] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.411 [2024-05-15 09:08:23.104079] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.411 [2024-05-15 09:08:23.104469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.411 [2024-05-15 09:08:23.104584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.411 [2024-05-15 09:08:23.104614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.411 [2024-05-15 09:08:23.104631] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.411 [2024-05-15 09:08:23.104890] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.411 [2024-05-15 09:08:23.105085] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.411 [2024-05-15 09:08:23.105106] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.411 [2024-05-15 09:08:23.105119] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.411 [2024-05-15 09:08:23.108121] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.411 [2024-05-15 09:08:23.117508] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.411 [2024-05-15 09:08:23.117891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.411 [2024-05-15 09:08:23.118059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.411 [2024-05-15 09:08:23.118084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.411 [2024-05-15 09:08:23.118101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.411 [2024-05-15 09:08:23.118355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.411 [2024-05-15 09:08:23.118586] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.411 [2024-05-15 09:08:23.118607] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.411 [2024-05-15 09:08:23.118621] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.411 [2024-05-15 09:08:23.121662] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.411 [2024-05-15 09:08:23.130767] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.411 [2024-05-15 09:08:23.131103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.411 [2024-05-15 09:08:23.131245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.411 [2024-05-15 09:08:23.131271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.411 [2024-05-15 09:08:23.131287] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.411 [2024-05-15 09:08:23.131533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.411 [2024-05-15 09:08:23.131729] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.411 [2024-05-15 09:08:23.131749] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.411 [2024-05-15 09:08:23.131762] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.411 [2024-05-15 09:08:23.134826] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.411 [2024-05-15 09:08:23.144052] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.411 [2024-05-15 09:08:23.144403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.411 [2024-05-15 09:08:23.144581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.411 [2024-05-15 09:08:23.144607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.411 [2024-05-15 09:08:23.144628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.411 [2024-05-15 09:08:23.144874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.411 [2024-05-15 09:08:23.145088] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.411 [2024-05-15 09:08:23.145108] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.411 [2024-05-15 09:08:23.145121] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.411 [2024-05-15 09:08:23.148284] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.411 [2024-05-15 09:08:23.157411] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.411 [2024-05-15 09:08:23.157839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.411 [2024-05-15 09:08:23.157949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.411 [2024-05-15 09:08:23.157975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.411 [2024-05-15 09:08:23.157992] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.411 [2024-05-15 09:08:23.158261] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.411 [2024-05-15 09:08:23.158476] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.411 [2024-05-15 09:08:23.158513] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.411 [2024-05-15 09:08:23.158528] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.411 [2024-05-15 09:08:23.162069] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
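Each block in this run is one pass through the same cycle: disconnect the controller, try to open a new TCP qpair, hit ECONNREFUSED, mark the controller failed, log "Resetting controller failed.", and go around again. A rough, self-contained C sketch of that retry shape; try_connect_qpair() and delay_ms() are hypothetical stand-ins for the SPDK internals named in the log (nvme_ctrlr_disconnect, nvme_tcp_qpair_connect_sock, spdk_nvme_ctrlr_reconnect_poll_async), not real SPDK APIs:

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    /* Hypothetical stand-in: always fails, simulating ECONNREFUSED while
     * the target stays down, as in the log above. */
    static bool try_connect_qpair(void)
    {
        return false;
    }

    static void delay_ms(long ms)
    {
        struct timespec ts = { ms / 1000, (ms % 1000) * 1000000L };
        nanosleep(&ts, NULL);
    }

    int main(void)
    {
        for (int attempt = 1; attempt <= 5; attempt++) {
            /* "resetting controller": tear down the failed qpair first */
            printf("attempt %d: resetting controller\n", attempt);

            if (try_connect_qpair()) {
                printf("controller reconnected\n");
                return 0;
            }

            /* connect failed -> "controller reinitialization failed",
             * "in failed state.", "Resetting controller failed." */
            printf("attempt %d: Resetting controller failed.\n", attempt);

            /* attempts in the log land roughly 13 ms apart; that spacing
             * is poller timing here, not a documented retry constant */
            delay_ms(13);
        }
        return 1;
    }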
00:42:28.411 [2024-05-15 09:08:23.170803] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.411 [2024-05-15 09:08:23.171173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.411 [2024-05-15 09:08:23.171312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.411 [2024-05-15 09:08:23.171338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.411 [2024-05-15 09:08:23.171354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.411 [2024-05-15 09:08:23.171595] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.411 [2024-05-15 09:08:23.171792] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.411 [2024-05-15 09:08:23.171812] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.411 [2024-05-15 09:08:23.171825] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.411 [2024-05-15 09:08:23.174946] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.411 [2024-05-15 09:08:23.184105] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.411 [2024-05-15 09:08:23.184501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.411 [2024-05-15 09:08:23.184640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.411 [2024-05-15 09:08:23.184668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.411 [2024-05-15 09:08:23.184685] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.411 [2024-05-15 09:08:23.184943] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.411 [2024-05-15 09:08:23.185141] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.411 [2024-05-15 09:08:23.185161] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.411 [2024-05-15 09:08:23.185175] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.411 [2024-05-15 09:08:23.188366] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.411 [2024-05-15 09:08:23.197826] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.411 [2024-05-15 09:08:23.198224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.411 [2024-05-15 09:08:23.198409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.411 [2024-05-15 09:08:23.198437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.411 [2024-05-15 09:08:23.198454] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.411 [2024-05-15 09:08:23.198699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.411 [2024-05-15 09:08:23.198966] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.411 [2024-05-15 09:08:23.198992] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.411 [2024-05-15 09:08:23.199008] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.670 [2024-05-15 09:08:23.202687] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.670 [2024-05-15 09:08:23.211935] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.670 [2024-05-15 09:08:23.212355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.670 [2024-05-15 09:08:23.212463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.670 [2024-05-15 09:08:23.212487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.670 [2024-05-15 09:08:23.212502] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.670 [2024-05-15 09:08:23.212746] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.670 [2024-05-15 09:08:23.212992] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.670 [2024-05-15 09:08:23.213016] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.670 [2024-05-15 09:08:23.213032] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.670 [2024-05-15 09:08:23.216679] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.670 [2024-05-15 09:08:23.225901] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.670 [2024-05-15 09:08:23.226327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.670 [2024-05-15 09:08:23.226487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.670 [2024-05-15 09:08:23.226512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.670 [2024-05-15 09:08:23.226528] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.670 [2024-05-15 09:08:23.226781] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.670 [2024-05-15 09:08:23.227035] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.670 [2024-05-15 09:08:23.227060] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.670 [2024-05-15 09:08:23.227077] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.670 [2024-05-15 09:08:23.230724] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.670 [2024-05-15 09:08:23.239945] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.670 [2024-05-15 09:08:23.240348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.670 [2024-05-15 09:08:23.240617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.670 [2024-05-15 09:08:23.240684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.670 [2024-05-15 09:08:23.240702] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.670 [2024-05-15 09:08:23.240945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.670 [2024-05-15 09:08:23.241191] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.670 [2024-05-15 09:08:23.241225] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.670 [2024-05-15 09:08:23.241244] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.670 [2024-05-15 09:08:23.244877] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.670 [2024-05-15 09:08:23.253899] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.670 [2024-05-15 09:08:23.254317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.670 [2024-05-15 09:08:23.254495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.670 [2024-05-15 09:08:23.254523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.670 [2024-05-15 09:08:23.254540] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.670 [2024-05-15 09:08:23.254783] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.670 [2024-05-15 09:08:23.255031] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.670 [2024-05-15 09:08:23.255055] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.670 [2024-05-15 09:08:23.255071] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.670 [2024-05-15 09:08:23.258719] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.670 [2024-05-15 09:08:23.267943] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.670 [2024-05-15 09:08:23.268335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.670 [2024-05-15 09:08:23.268454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.670 [2024-05-15 09:08:23.268483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.670 [2024-05-15 09:08:23.268501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.670 [2024-05-15 09:08:23.268744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.670 [2024-05-15 09:08:23.268990] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.670 [2024-05-15 09:08:23.269020] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.670 [2024-05-15 09:08:23.269037] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.670 [2024-05-15 09:08:23.272685] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.670 [2024-05-15 09:08:23.281910] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.670 [2024-05-15 09:08:23.282328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.670 [2024-05-15 09:08:23.282477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.670 [2024-05-15 09:08:23.282505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.670 [2024-05-15 09:08:23.282522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.670 [2024-05-15 09:08:23.282765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.670 [2024-05-15 09:08:23.283013] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.670 [2024-05-15 09:08:23.283037] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.670 [2024-05-15 09:08:23.283053] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.670 [2024-05-15 09:08:23.286701] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.670 [2024-05-15 09:08:23.295927] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.670 [2024-05-15 09:08:23.296314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.670 [2024-05-15 09:08:23.296463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.670 [2024-05-15 09:08:23.296492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.670 [2024-05-15 09:08:23.296510] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.670 [2024-05-15 09:08:23.296753] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.670 [2024-05-15 09:08:23.296999] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.670 [2024-05-15 09:08:23.297025] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.670 [2024-05-15 09:08:23.297041] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.670 [2024-05-15 09:08:23.300689] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.670 [2024-05-15 09:08:23.309918] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.670 [2024-05-15 09:08:23.310302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.670 [2024-05-15 09:08:23.310447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.670 [2024-05-15 09:08:23.310475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.671 [2024-05-15 09:08:23.310493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.671 [2024-05-15 09:08:23.310735] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.671 [2024-05-15 09:08:23.310983] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.671 [2024-05-15 09:08:23.311008] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.671 [2024-05-15 09:08:23.311030] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.671 [2024-05-15 09:08:23.314676] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.671 [2024-05-15 09:08:23.323901] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.671 [2024-05-15 09:08:23.324307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.671 [2024-05-15 09:08:23.324455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.671 [2024-05-15 09:08:23.324484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.671 [2024-05-15 09:08:23.324502] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.671 [2024-05-15 09:08:23.324744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.671 [2024-05-15 09:08:23.324992] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.671 [2024-05-15 09:08:23.325017] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.671 [2024-05-15 09:08:23.325034] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.671 [2024-05-15 09:08:23.328680] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.671 [2024-05-15 09:08:23.337920] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.671 [2024-05-15 09:08:23.338329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.671 [2024-05-15 09:08:23.338485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.671 [2024-05-15 09:08:23.338513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.671 [2024-05-15 09:08:23.338532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.671 [2024-05-15 09:08:23.338775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.671 [2024-05-15 09:08:23.339023] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.671 [2024-05-15 09:08:23.339048] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.671 [2024-05-15 09:08:23.339065] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.671 [2024-05-15 09:08:23.342710] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.671 [2024-05-15 09:08:23.351932] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.671 [2024-05-15 09:08:23.352357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.671 [2024-05-15 09:08:23.352616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.671 [2024-05-15 09:08:23.352677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.671 [2024-05-15 09:08:23.352696] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.671 [2024-05-15 09:08:23.352939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.671 [2024-05-15 09:08:23.353187] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.671 [2024-05-15 09:08:23.353212] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.671 [2024-05-15 09:08:23.353243] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.671 [2024-05-15 09:08:23.356876] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.671 [2024-05-15 09:08:23.365887] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.671 [2024-05-15 09:08:23.366270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.671 [2024-05-15 09:08:23.366443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.671 [2024-05-15 09:08:23.366471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.671 [2024-05-15 09:08:23.366489] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.671 [2024-05-15 09:08:23.366732] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.671 [2024-05-15 09:08:23.366980] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.671 [2024-05-15 09:08:23.367005] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.671 [2024-05-15 09:08:23.367022] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2436946 Killed "${NVMF_APP[@]}" "$@" 00:42:28.671 09:08:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:42:28.671 09:08:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:42:28.671 [2024-05-15 09:08:23.370671] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.671 09:08:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:42:28.671 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@721 -- # xtrace_disable 00:42:28.671 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:28.671 09:08:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2437899 00:42:28.671 09:08:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:42:28.671 09:08:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2437899 00:42:28.671 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@828 -- # '[' -z 2437899 ']' 00:42:28.671 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:28.671 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local max_retries=100 00:42:28.671 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:28.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
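The "Killed" line above is the point of the test: bdevperf.sh deliberately kills the running target mid-I/O, then tgt_init restarts nvmf_tgt (new pid 2437899) inside the cvl_0_0_ns_spdk namespace and blocks in waitforlisten until the RPC socket answers. A rough sketch of what that wait amounts to, assuming the pid and socket path printed here and a stock SPDK checkout for rpc.py (the retry budget is an arbitrary choice):

    pid=2437899                        # nvmfpid from the trace above
    for _ in $(seq 1 100); do
        # rpc_get_methods is a cheap no-op RPC; success means the app is listening.
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
        sleep 0.1
    done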
00:42:28.671 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # xtrace_disable 00:42:28.671 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:28.671 [2024-05-15 09:08:23.379897] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.671 [2024-05-15 09:08:23.380303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.671 [2024-05-15 09:08:23.380443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.671 [2024-05-15 09:08:23.380472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.671 [2024-05-15 09:08:23.380490] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.671 [2024-05-15 09:08:23.380732] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.671 [2024-05-15 09:08:23.380979] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.671 [2024-05-15 09:08:23.381002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.671 [2024-05-15 09:08:23.381018] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.671 [2024-05-15 09:08:23.384662] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.671 [2024-05-15 09:08:23.393882] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.671 [2024-05-15 09:08:23.394271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.671 [2024-05-15 09:08:23.394390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.671 [2024-05-15 09:08:23.394419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.671 [2024-05-15 09:08:23.394437] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.671 [2024-05-15 09:08:23.394680] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.671 [2024-05-15 09:08:23.394927] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.671 [2024-05-15 09:08:23.394950] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.671 [2024-05-15 09:08:23.394966] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.671 [2024-05-15 09:08:23.398605] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.671 [2024-05-15 09:08:23.407825] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.671 [2024-05-15 09:08:23.408230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.671 [2024-05-15 09:08:23.408373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.671 [2024-05-15 09:08:23.408402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.671 [2024-05-15 09:08:23.408420] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.671 [2024-05-15 09:08:23.408662] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.671 [2024-05-15 09:08:23.408909] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.671 [2024-05-15 09:08:23.408932] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.671 [2024-05-15 09:08:23.408948] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.671 [2024-05-15 09:08:23.412589] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.671 [2024-05-15 09:08:23.421257] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:42:28.671 [2024-05-15 09:08:23.421324] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:28.671 [2024-05-15 09:08:23.421812] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.671 [2024-05-15 09:08:23.422196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.671 [2024-05-15 09:08:23.422359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.671 [2024-05-15 09:08:23.422390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.671 [2024-05-15 09:08:23.422409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.671 [2024-05-15 09:08:23.422651] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.671 [2024-05-15 09:08:23.422900] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.671 [2024-05-15 09:08:23.422930] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.671 [2024-05-15 09:08:23.422947] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.671 [2024-05-15 09:08:23.426597] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.671 [2024-05-15 09:08:23.435818] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.671 [2024-05-15 09:08:23.436248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.671 [2024-05-15 09:08:23.436376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.671 [2024-05-15 09:08:23.436404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.671 [2024-05-15 09:08:23.436423] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.671 [2024-05-15 09:08:23.436665] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.671 [2024-05-15 09:08:23.436911] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.671 [2024-05-15 09:08:23.436935] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.671 [2024-05-15 09:08:23.436951] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.671 [2024-05-15 09:08:23.440592] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.671 [2024-05-15 09:08:23.449811] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.671 [2024-05-15 09:08:23.450226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.671 [2024-05-15 09:08:23.450347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.671 [2024-05-15 09:08:23.450378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.671 [2024-05-15 09:08:23.450396] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.671 [2024-05-15 09:08:23.450638] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.671 [2024-05-15 09:08:23.450885] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.671 [2024-05-15 09:08:23.450909] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.671 [2024-05-15 09:08:23.450925] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.671 [2024-05-15 09:08:23.454729] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.931 [2024-05-15 09:08:23.463808] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.931 [2024-05-15 09:08:23.464191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.931 [2024-05-15 09:08:23.464359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.931 [2024-05-15 09:08:23.464389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.931 [2024-05-15 09:08:23.464408] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.931 [2024-05-15 09:08:23.464651] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.931 [2024-05-15 09:08:23.464898] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.931 [2024-05-15 09:08:23.464925] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.931 [2024-05-15 09:08:23.464948] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.931 [2024-05-15 09:08:23.468608] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.931 EAL: No free 2048 kB hugepages reported on node 1 00:42:28.931 [2024-05-15 09:08:23.477832] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.931 [2024-05-15 09:08:23.478251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.931 [2024-05-15 09:08:23.478429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.931 [2024-05-15 09:08:23.478458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.931 [2024-05-15 09:08:23.478477] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.931 [2024-05-15 09:08:23.478720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.931 [2024-05-15 09:08:23.478967] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.931 [2024-05-15 09:08:23.478991] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.931 [2024-05-15 09:08:23.479008] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.931 [2024-05-15 09:08:23.482646] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
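The EAL notice about node 1 is informational rather than fatal here: it typically just means the 2048 kB hugepage pool was reserved on node 0 only. The per-node counts can be read straight out of sysfs (standard kernel paths, nothing SPDK-specific):

    # One line per NUMA node; node1 reading 0 would match the notice above.
    grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages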
00:42:28.931 [2024-05-15 09:08:23.491862] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.931 [2024-05-15 09:08:23.492266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.931 [2024-05-15 09:08:23.492440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.931 [2024-05-15 09:08:23.492469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.931 [2024-05-15 09:08:23.492487] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.931 [2024-05-15 09:08:23.492729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.931 [2024-05-15 09:08:23.492976] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.931 [2024-05-15 09:08:23.493000] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.931 [2024-05-15 09:08:23.493017] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.931 [2024-05-15 09:08:23.496656] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.931 [2024-05-15 09:08:23.505871] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.931 [2024-05-15 09:08:23.506256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.931 [2024-05-15 09:08:23.506409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.931 [2024-05-15 09:08:23.506438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.931 [2024-05-15 09:08:23.506457] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.931 [2024-05-15 09:08:23.506699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.931 [2024-05-15 09:08:23.506946] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.931 [2024-05-15 09:08:23.506971] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.931 [2024-05-15 09:08:23.506988] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.931 [2024-05-15 09:08:23.510639] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.931 [2024-05-15 09:08:23.513310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:28.931 [2024-05-15 09:08:23.519871] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.931 [2024-05-15 09:08:23.520362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.931 [2024-05-15 09:08:23.520492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.931 [2024-05-15 09:08:23.520521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.931 [2024-05-15 09:08:23.520542] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.931 [2024-05-15 09:08:23.520789] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.931 [2024-05-15 09:08:23.521040] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.931 [2024-05-15 09:08:23.521065] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.931 [2024-05-15 09:08:23.521083] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.931 [2024-05-15 09:08:23.524732] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.931 [2024-05-15 09:08:23.533967] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.931 [2024-05-15 09:08:23.534507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.931 [2024-05-15 09:08:23.534625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.931 [2024-05-15 09:08:23.534655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.931 [2024-05-15 09:08:23.534678] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.931 [2024-05-15 09:08:23.534930] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.931 [2024-05-15 09:08:23.535181] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.931 [2024-05-15 09:08:23.535207] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.931 [2024-05-15 09:08:23.535236] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.932 [2024-05-15 09:08:23.538866] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
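The "Total cores available: 3" notice follows from the -m 0xE core mask on the nvmf_tgt command line above: 0xE is binary 1110, i.e. cores 1, 2 and 3, which matches the three "Reactor started on core N" notices a little further down. Any such mask can be decoded with plain bash arithmetic:

    mask=0xE
    printf 'mask %s -> cores:' "$mask"
    for c in $(seq 0 63); do (( (mask >> c) & 1 )) && printf ' %d' "$c"; done; echo
    # -> mask 0xE -> cores: 1 2 3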
00:42:28.932 [2024-05-15 09:08:23.548080] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.932 [2024-05-15 09:08:23.548479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.932 [2024-05-15 09:08:23.548637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.932 [2024-05-15 09:08:23.548668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.932 [2024-05-15 09:08:23.548688] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.932 [2024-05-15 09:08:23.548933] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.932 [2024-05-15 09:08:23.549181] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.932 [2024-05-15 09:08:23.549205] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.932 [2024-05-15 09:08:23.549234] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.932 [2024-05-15 09:08:23.552867] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.932 [2024-05-15 09:08:23.562100] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.932 [2024-05-15 09:08:23.562577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.932 [2024-05-15 09:08:23.562773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.932 [2024-05-15 09:08:23.562803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.932 [2024-05-15 09:08:23.562823] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.932 [2024-05-15 09:08:23.563070] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.932 [2024-05-15 09:08:23.563329] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.932 [2024-05-15 09:08:23.563355] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.932 [2024-05-15 09:08:23.563373] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.932 [2024-05-15 09:08:23.567012] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.932 [2024-05-15 09:08:23.576042] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.932 [2024-05-15 09:08:23.576616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.932 [2024-05-15 09:08:23.576815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.932 [2024-05-15 09:08:23.576846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.932 [2024-05-15 09:08:23.576868] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.932 [2024-05-15 09:08:23.577120] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.932 [2024-05-15 09:08:23.577383] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.932 [2024-05-15 09:08:23.577410] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.932 [2024-05-15 09:08:23.577429] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.932 [2024-05-15 09:08:23.581066] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.932 [2024-05-15 09:08:23.590068] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.932 [2024-05-15 09:08:23.590471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.932 [2024-05-15 09:08:23.590629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.932 [2024-05-15 09:08:23.590658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.932 [2024-05-15 09:08:23.590678] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.932 [2024-05-15 09:08:23.590921] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.932 [2024-05-15 09:08:23.591169] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.932 [2024-05-15 09:08:23.591194] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.932 [2024-05-15 09:08:23.591212] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.932 [2024-05-15 09:08:23.594857] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.932 [2024-05-15 09:08:23.604066] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.932 [2024-05-15 09:08:23.604489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.932 [2024-05-15 09:08:23.604626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.932 [2024-05-15 09:08:23.604655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.932 [2024-05-15 09:08:23.604674] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.932 [2024-05-15 09:08:23.604919] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.932 [2024-05-15 09:08:23.605167] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.932 [2024-05-15 09:08:23.605191] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.932 [2024-05-15 09:08:23.605208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.932 [2024-05-15 09:08:23.606016] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:28.932 [2024-05-15 09:08:23.606052] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:28.932 [2024-05-15 09:08:23.606068] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:28.932 [2024-05-15 09:08:23.606082] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:28.932 [2024-05-15 09:08:23.606094] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:28.932 [2024-05-15 09:08:23.606180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:42:28.932 [2024-05-15 09:08:23.606242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:42:28.932 [2024-05-15 09:08:23.606246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:28.932 [2024-05-15 09:08:23.608848] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
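The app_setup_trace notices above are actionable while the target is still running; both suggested variants look like this (the snapshot command is quoted verbatim by the app; the copy destination and the -f offline-decode form are assumptions based on spdk_trace's usual usage):

    spdk_trace -s nvmf -i 0                 # live snapshot of the nvmf tracepoints
    cp /dev/shm/nvmf_trace.0 /tmp/          # or keep the ring buffer for later
    spdk_trace -f /tmp/nvmf_trace.0         # offline decode of the copied file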
00:42:28.932 [2024-05-15 09:08:23.618101] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.932 [2024-05-15 09:08:23.618685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.932 [2024-05-15 09:08:23.618820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.932 [2024-05-15 09:08:23.618850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.932 [2024-05-15 09:08:23.618873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.932 [2024-05-15 09:08:23.619129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.932 [2024-05-15 09:08:23.619391] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.932 [2024-05-15 09:08:23.619418] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.932 [2024-05-15 09:08:23.619439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.932 [2024-05-15 09:08:23.623075] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.932 [2024-05-15 09:08:23.632101] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.932 [2024-05-15 09:08:23.632682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.932 [2024-05-15 09:08:23.632853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.932 [2024-05-15 09:08:23.632884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.932 [2024-05-15 09:08:23.632907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.932 [2024-05-15 09:08:23.633161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.932 [2024-05-15 09:08:23.633434] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.932 [2024-05-15 09:08:23.633460] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.932 [2024-05-15 09:08:23.633481] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.932 [2024-05-15 09:08:23.637116] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.932 [2024-05-15 09:08:23.646147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.932 [2024-05-15 09:08:23.646755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.932 [2024-05-15 09:08:23.646924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.932 [2024-05-15 09:08:23.646955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.932 [2024-05-15 09:08:23.646978] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.932 [2024-05-15 09:08:23.647241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.932 [2024-05-15 09:08:23.647495] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.932 [2024-05-15 09:08:23.647520] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.932 [2024-05-15 09:08:23.647541] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.932 [2024-05-15 09:08:23.651180] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.932 [2024-05-15 09:08:23.660202] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.932 [2024-05-15 09:08:23.660742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.932 [2024-05-15 09:08:23.660943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.932 [2024-05-15 09:08:23.660972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.932 [2024-05-15 09:08:23.660995] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.933 [2024-05-15 09:08:23.661259] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.933 [2024-05-15 09:08:23.661512] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.933 [2024-05-15 09:08:23.661537] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.933 [2024-05-15 09:08:23.661557] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.933 [2024-05-15 09:08:23.665189] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.933 [2024-05-15 09:08:23.674211] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.933 [2024-05-15 09:08:23.674769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.933 [2024-05-15 09:08:23.674946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.933 [2024-05-15 09:08:23.674975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.933 [2024-05-15 09:08:23.674998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.933 [2024-05-15 09:08:23.675263] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.933 [2024-05-15 09:08:23.675518] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.933 [2024-05-15 09:08:23.675563] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.933 [2024-05-15 09:08:23.675585] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.933 [2024-05-15 09:08:23.679230] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.933 [2024-05-15 09:08:23.688260] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.933 [2024-05-15 09:08:23.688819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.933 [2024-05-15 09:08:23.688990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.933 [2024-05-15 09:08:23.689020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.933 [2024-05-15 09:08:23.689043] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.933 [2024-05-15 09:08:23.689308] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.933 [2024-05-15 09:08:23.689562] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.933 [2024-05-15 09:08:23.689588] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.933 [2024-05-15 09:08:23.689609] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.933 [2024-05-15 09:08:23.693250] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.933 [2024-05-15 09:08:23.702250] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.933 [2024-05-15 09:08:23.702616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.933 [2024-05-15 09:08:23.702776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.933 [2024-05-15 09:08:23.702807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.933 [2024-05-15 09:08:23.702826] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.933 [2024-05-15 09:08:23.703070] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.933 [2024-05-15 09:08:23.703331] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.933 [2024-05-15 09:08:23.703357] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.933 [2024-05-15 09:08:23.703375] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.933 [2024-05-15 09:08:23.707005] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:28.933 [2024-05-15 09:08:23.715998] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:28.933 [2024-05-15 09:08:23.716388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.933 [2024-05-15 09:08:23.716534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:28.933 [2024-05-15 09:08:23.716560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:28.933 [2024-05-15 09:08:23.716577] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:28.933 [2024-05-15 09:08:23.716796] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:28.933 [2024-05-15 09:08:23.717027] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:28.933 [2024-05-15 09:08:23.717049] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:28.933 [2024-05-15 09:08:23.717073] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:28.933 [2024-05-15 09:08:23.720500] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:28.933 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:42:28.933 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@861 -- # return 0 00:42:28.933 09:08:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:42:28.933 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@727 -- # xtrace_disable 00:42:28.933 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:29.192 [2024-05-15 09:08:23.729678] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:29.192 [2024-05-15 09:08:23.730070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:29.192 [2024-05-15 09:08:23.730184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:29.192 [2024-05-15 09:08:23.730210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:29.192 [2024-05-15 09:08:23.730241] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:29.192 [2024-05-15 09:08:23.730461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:29.192 [2024-05-15 09:08:23.730703] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:29.192 [2024-05-15 09:08:23.730724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:29.192 [2024-05-15 09:08:23.730738] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:29.192 [2024-05-15 09:08:23.733970] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:29.192 [2024-05-15 09:08:23.743268] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:29.192 [2024-05-15 09:08:23.743649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:29.192 [2024-05-15 09:08:23.743766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:29.192 [2024-05-15 09:08:23.743793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:29.192 [2024-05-15 09:08:23.743809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:29.192 [2024-05-15 09:08:23.744043] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:29.192 [2024-05-15 09:08:23.744303] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:29.192 [2024-05-15 09:08:23.744326] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:29.192 [2024-05-15 09:08:23.744341] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:29.192 09:08:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:29.192 [2024-05-15 09:08:23.747673] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
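The trap installed just above is what keeps a failed run from leaking a target: whichever way the script exits, process_shm is attempted (its failure made non-fatal by '|| :') and nvmftestfini tears everything down. The same pattern in isolation, with hypothetical stand-ins for the two helpers:

    cleanup() {
        collect_shm_stats || :    # hypothetical stand-in for process_shm; never abort teardown
        stop_target               # hypothetical stand-in for nvmftestfini
    }
    trap cleanup SIGINT SIGTERM EXIT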
00:42:29.192 09:08:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:29.192 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:29.192 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:29.192 [2024-05-15 09:08:23.752660] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:29.192 [2024-05-15 09:08:23.756956] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:29.192 [2024-05-15 09:08:23.757361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:29.192 [2024-05-15 09:08:23.757537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:29.192 [2024-05-15 09:08:23.757563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:29.192 [2024-05-15 09:08:23.757579] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:29.192 [2024-05-15 09:08:23.757826] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:29.192 [2024-05-15 09:08:23.758050] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:29.192 [2024-05-15 09:08:23.758073] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:29.192 [2024-05-15 09:08:23.758088] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:29.192 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:29.192 09:08:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:29.192 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:29.192 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:29.192 [2024-05-15 09:08:23.761425] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
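rpc_cmd is a thin wrapper over SPDK's rpc.py, so the two calls traced here can be issued by hand against the same socket. A sketch with the flags copied from the trace (per rpc.py's help, -u is the I/O unit size in bytes, and bdev_malloc_create takes size in MiB followed by block size in bytes):

    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0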
00:42:29.192 [2024-05-15 09:08:23.770395] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:29.192 [2024-05-15 09:08:23.770860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:29.192 [2024-05-15 09:08:23.770994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:29.192 [2024-05-15 09:08:23.771020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:29.192 [2024-05-15 09:08:23.771037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:29.192 [2024-05-15 09:08:23.771293] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:29.192 [2024-05-15 09:08:23.771516] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:29.192 [2024-05-15 09:08:23.771538] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:29.192 [2024-05-15 09:08:23.771569] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:29.192 [2024-05-15 09:08:23.774793] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:29.192 [2024-05-15 09:08:23.783891] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:29.192 [2024-05-15 09:08:23.784408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:29.192 [2024-05-15 09:08:23.784566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:29.192 [2024-05-15 09:08:23.784594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:29.192 [2024-05-15 09:08:23.784615] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:29.192 [2024-05-15 09:08:23.784869] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:29.192 [2024-05-15 09:08:23.785083] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:29.192 [2024-05-15 09:08:23.785104] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:29.192 [2024-05-15 09:08:23.785122] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:29.192 [2024-05-15 09:08:23.788388] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:29.192 Malloc0 00:42:29.192 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:29.192 09:08:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:29.192 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:29.192 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:29.192 [2024-05-15 09:08:23.797632] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:29.192 [2024-05-15 09:08:23.798082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:29.192 [2024-05-15 09:08:23.798257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:29.192 [2024-05-15 09:08:23.798285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:29.193 [2024-05-15 09:08:23.798305] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:29.193 [2024-05-15 09:08:23.798546] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:29.193 [2024-05-15 09:08:23.798760] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:29.193 [2024-05-15 09:08:23.798781] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:29.193 [2024-05-15 09:08:23.798798] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:29.193 [2024-05-15 09:08:23.802086] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:29.193 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:29.193 09:08:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:29.193 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:29.193 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:29.193 [2024-05-15 09:08:23.811261] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:29.193 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:29.193 [2024-05-15 09:08:23.811615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:29.193 09:08:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:29.193 [2024-05-15 09:08:23.811755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:29.193 [2024-05-15 09:08:23.811782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedfd30 with addr=10.0.0.2, port=4420 00:42:29.193 [2024-05-15 09:08:23.811799] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedfd30 is same with the state(5) to be set 00:42:29.193 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:29.193 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:29.193 [2024-05-15 09:08:23.812018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfd30 (9): Bad file descriptor 00:42:29.193 [2024-05-15 09:08:23.812250] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:29.193 [2024-05-15 09:08:23.812274] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:42:29.193 [2024-05-15 09:08:23.812289] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:29.193 [2024-05-15 09:08:23.815286] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:42:29.193 [2024-05-15 09:08:23.815586] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:29.193 [2024-05-15 09:08:23.815611] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:29.193 09:08:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:29.193 09:08:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2437232 00:42:29.193 [2024-05-15 09:08:23.824991] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:29.193 [2024-05-15 09:08:23.943822] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
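With cnode1 re-created, Malloc0 attached as its namespace, and the 10.0.0.2:4420 listener back up, the very next reset attempt finally succeeds ("Resetting controller successful"). Outside this harness, a stock Linux initiator could reach the same endpoint with nvme-cli, using only values printed in this log:

    modprobe nvme-tcp
    nvme discover -t tcp -a 10.0.0.2 -s 4420                       # should list cnode1
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list                                                      # Malloc0 shows up as a namespace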
00:42:39.196
00:42:39.196                                  Latency(us)
00:42:39.196 Device Information       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:42:39.196 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:42:39.196 Verification LBA range: start 0x0 length 0x4000
00:42:39.196 Nvme1n1                  :      15.02    6623.72      25.87    8952.73     0.00    8193.09     585.58   19806.44
00:42:39.196 ===================================================================================================================
00:42:39.196 Total                    :               6623.72      25.87    8952.73     0.00    8193.09     585.58   19806.44
00:42:39.196 09:08:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:42:39.196 rmmod nvme_tcp 00:42:39.196 rmmod nvme_fabrics 00:42:39.196 rmmod nvme_keyring 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2437899 ']' 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2437899 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@947 -- # '[' -z 2437899 ']' 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # kill -0 2437899 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # uname 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2437899 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2437899' 00:42:39.196 killing process with pid 2437899 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # kill 2437899 00:42:39.196 [2024-05-15 09:08:33.084817] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@971 -- # wait 2437899 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:39.196 09:08:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:40.603 09:08:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:42:40.603 00:42:40.603 real 0m22.801s 00:42:40.603 user 0m59.636s 00:42:40.603 sys 0m4.573s 00:42:40.603 09:08:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:42:40.603 09:08:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:40.603 ************************************ 00:42:40.603 END TEST nvmf_bdevperf 00:42:40.603 ************************************ 00:42:40.603 09:08:35 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:42:40.603 09:08:35 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:42:40.603 09:08:35 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:42:40.603 09:08:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:40.861 ************************************ 00:42:40.861 START TEST nvmf_target_disconnect 00:42:40.861 ************************************ 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:42:40.861 * Looking for test storage... 
00:42:40.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:42:40.861 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:42:40.862 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:42:40.862 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:40.862 09:08:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:40.862 09:08:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:40.862 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:42:40.862 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:42:40.862 09:08:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:42:40.862 09:08:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:42:43.394 Found 0000:09:00.0 (0x8086 - 0x159b) 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:42:43.394 Found 0000:09:00.1 (0x8086 - 0x159b) 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:43.394 09:08:37 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:42:43.394 Found net devices under 0000:09:00.0: cvl_0_0 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:42:43.394 Found net devices under 0000:09:00.1: cvl_0_1 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:42:43.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:43.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:42:43.394 00:42:43.394 --- 10.0.0.2 ping statistics --- 00:42:43.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:43.394 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:43.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:43.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:42:43.394 00:42:43.394 --- 10.0.0.1 ping statistics --- 00:42:43.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:43.394 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:43.394 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:42:43.395 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:42:43.395 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:43.395 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:42:43.395 09:08:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:42:43.395 09:08:37 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:42:43.395 09:08:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:42:43.395 09:08:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1104 -- # xtrace_disable 00:42:43.395 09:08:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:42:43.395 ************************************ 00:42:43.395 START TEST nvmf_target_disconnect_tc1 00:42:43.395 ************************************ 00:42:43.395 09:08:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # nvmf_target_disconnect_tc1 00:42:43.395 09:08:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:43.395 09:08:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:42:43.395 
09:08:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:43.395 09:08:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:42:43.395 09:08:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:43.395 09:08:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:42:43.395 09:08:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:43.395 09:08:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:42:43.395 09:08:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:43.395 09:08:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:42:43.395 09:08:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:42:43.395 09:08:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:43.395 EAL: No free 2048 kB hugepages reported on node 1 00:42:43.395 [2024-05-15 09:08:37.995049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.395 [2024-05-15 09:08:37.995290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.395 [2024-05-15 09:08:37.995323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208f520 with addr=10.0.0.2, port=4420 00:42:43.395 [2024-05-15 09:08:37.995365] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:42:43.395 [2024-05-15 09:08:37.995393] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:43.395 [2024-05-15 09:08:37.995410] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:42:43.395 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:42:43.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:42:43.395 Initializing NVMe Controllers 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@676 -- # (( !es == 0 )) 00:42:43.395 00:42:43.395 real 0m0.102s 00:42:43.395 user 0m0.042s 00:42:43.395 sys 0m0.059s 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:42:43.395 ************************************ 00:42:43.395 END TEST nvmf_target_disconnect_tc1 00:42:43.395 ************************************ 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1104 -- # xtrace_disable 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:42:43.395 ************************************ 00:42:43.395 START TEST nvmf_target_disconnect_tc2 00:42:43.395 ************************************ 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # nvmf_target_disconnect_tc2 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2441344 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2441344 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # '[' -z 2441344 ']' 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:43.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:42:43.395 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:43.395 [2024-05-15 09:08:38.105411] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:42:43.395 [2024-05-15 09:08:38.105493] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:43.395 EAL: No free 2048 kB hugepages reported on node 1 00:42:43.395 [2024-05-15 09:08:38.180484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:43.654 [2024-05-15 09:08:38.271925] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:43.654 [2024-05-15 09:08:38.272002] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:43.654 [2024-05-15 09:08:38.272015] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:43.654 [2024-05-15 09:08:38.272026] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:43.654 [2024-05-15 09:08:38.272036] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:43.654 [2024-05-15 09:08:38.272121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:42:43.654 [2024-05-15 09:08:38.272185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:42:43.654 [2024-05-15 09:08:38.272250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:42:43.654 [2024-05-15 09:08:38.272254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:42:43.654 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:42:43.654 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@861 -- # return 0 00:42:43.654 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:42:43.654 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:42:43.654 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:43.654 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:43.654 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:43.654 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:43.654 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:43.912 Malloc0 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:43.912 [2024-05-15 09:08:38.451852] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:43.912 [2024-05-15 09:08:38.479857] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:42:43.912 [2024-05-15 09:08:38.480153] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2441366 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:42:43.912 09:08:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:43.912 EAL: No free 2048 kB hugepages reported on node 1 00:42:45.821 09:08:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2441344 00:42:45.821 09:08:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:42:45.821 Read completed with error (sct=0, sc=8) 00:42:45.821 starting I/O failed 00:42:45.821 Read completed with error (sct=0, sc=8) 00:42:45.821 starting I/O failed 00:42:45.821 Read completed with error (sct=0, sc=8) 00:42:45.821 starting I/O failed 00:42:45.821 Read completed with error (sct=0, sc=8) 00:42:45.821 starting I/O failed 00:42:45.821 Write completed with error (sct=0, sc=8) 00:42:45.821 starting I/O failed 00:42:45.821 Write completed with error (sct=0, sc=8) 00:42:45.821 starting I/O failed 00:42:45.821 Write completed with error (sct=0, sc=8) 00:42:45.821 starting I/O failed 00:42:45.821 Read completed with error (sct=0, sc=8) 00:42:45.821 starting I/O failed 00:42:45.821 Write completed with error (sct=0, sc=8) 00:42:45.821 starting I/O failed 00:42:45.821 Read completed with error (sct=0, sc=8) 00:42:45.821 starting I/O failed 00:42:45.821 Read completed with error (sct=0, sc=8) 00:42:45.821 starting I/O failed 00:42:45.821 Write completed with error (sct=0, sc=8) 00:42:45.821 starting I/O failed 00:42:45.821 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 [2024-05-15 09:08:40.507165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:45.822 Read completed 
with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Read completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 Write completed with error (sct=0, sc=8) 00:42:45.822 starting I/O failed 00:42:45.822 [2024-05-15 09:08:40.507565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:42:45.822 [2024-05-15 09:08:40.507827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.507942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.507967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.822 qpair failed and we were unable to recover it. 
00:42:45.822 [2024-05-15 09:08:40.508092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.508270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.508295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.822 qpair failed and we were unable to recover it. 00:42:45.822 [2024-05-15 09:08:40.508411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.508515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.508543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.822 qpair failed and we were unable to recover it. 00:42:45.822 [2024-05-15 09:08:40.508706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.508849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.508875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.822 qpair failed and we were unable to recover it. 00:42:45.822 [2024-05-15 09:08:40.509015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.509140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.509166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.822 qpair failed and we were unable to recover it. 00:42:45.822 [2024-05-15 09:08:40.509285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.509432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.509459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.822 qpair failed and we were unable to recover it. 00:42:45.822 [2024-05-15 09:08:40.509626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.509757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.509799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.822 qpair failed and we were unable to recover it. 00:42:45.822 [2024-05-15 09:08:40.509943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.510105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.510131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.822 qpair failed and we were unable to recover it. 
00:42:45.822 [2024-05-15 09:08:40.510286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.510392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.510422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.822 qpair failed and we were unable to recover it. 00:42:45.822 [2024-05-15 09:08:40.510590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.510741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.510768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.822 qpair failed and we were unable to recover it. 00:42:45.822 [2024-05-15 09:08:40.510876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.511034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.511064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.822 qpair failed and we were unable to recover it. 00:42:45.822 [2024-05-15 09:08:40.511187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.511323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.511349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.822 qpair failed and we were unable to recover it. 00:42:45.822 [2024-05-15 09:08:40.511446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.511559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.511586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.822 qpair failed and we were unable to recover it. 00:42:45.822 [2024-05-15 09:08:40.511684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.511812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.511855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.822 qpair failed and we were unable to recover it. 00:42:45.822 [2024-05-15 09:08:40.512024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.512192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.512226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.822 qpair failed and we were unable to recover it. 
00:42:45.822 [2024-05-15 09:08:40.512331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.822 [2024-05-15 09:08:40.512432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.823 [2024-05-15 09:08:40.512459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.823 qpair failed and we were unable to recover it. 00:42:45.823 [2024-05-15 09:08:40.512621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.823 [2024-05-15 09:08:40.512745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.823 [2024-05-15 09:08:40.512771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.823 qpair failed and we were unable to recover it. 00:42:45.823 [2024-05-15 09:08:40.512904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.823 [2024-05-15 09:08:40.513097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.823 [2024-05-15 09:08:40.513123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.823 qpair failed and we were unable to recover it. 00:42:45.823 [2024-05-15 09:08:40.513278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.823 [2024-05-15 09:08:40.513414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.823 [2024-05-15 09:08:40.513441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.823 qpair failed and we were unable to recover it. 00:42:45.823 [2024-05-15 09:08:40.513614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.823 [2024-05-15 09:08:40.513748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.823 [2024-05-15 09:08:40.513774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.823 qpair failed and we were unable to recover it. 00:42:45.823 [2024-05-15 09:08:40.513935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.823 [2024-05-15 09:08:40.514033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.823 [2024-05-15 09:08:40.514061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.823 qpair failed and we were unable to recover it. 00:42:45.823 [2024-05-15 09:08:40.514196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.823 [2024-05-15 09:08:40.514321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.823 [2024-05-15 09:08:40.514349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.823 qpair failed and we were unable to recover it. 
00:42:45.823 Read completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Read completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Read completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Read completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Read completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Read completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Read completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Read completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Read completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Read completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Write completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Read completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Read completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Write completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Write completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Read completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Read completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Read completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Write completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Write completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Write completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Write completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Read completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Write completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Write completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Read completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Read completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Read completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Write completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Write completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Read completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 Write completed with error (sct=0, sc=8) 00:42:45.823 starting I/O failed 00:42:45.823 [2024-05-15 09:08:40.514694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:42:45.823 [2024-05-15 09:08:40.514936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.823 [2024-05-15 09:08:40.515079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.823 [2024-05-15 09:08:40.515110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:45.823 qpair failed and we were unable to recover it. 
00:42:45.823 [2024-05-15 09:08:40.515274 .. 09:08:40.559681] the three-line sequence above (two posix.c:1037:posix_sock_create connect() failures with errno = 111, then one nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock sock connection error against addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats another 147 times; from 09:08:40.521382 onward the failing tqpair is 0x16a1570 rather than 0x7f9f30000b90.
00:42:45.828 [2024-05-15 09:08:40.559815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.828 [2024-05-15 09:08:40.559919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.828 [2024-05-15 09:08:40.559945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.828 qpair failed and we were unable to recover it. 00:42:45.828 [2024-05-15 09:08:40.560057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.828 [2024-05-15 09:08:40.560153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.828 [2024-05-15 09:08:40.560180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.828 qpair failed and we were unable to recover it. 00:42:45.828 [2024-05-15 09:08:40.560373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.828 [2024-05-15 09:08:40.560487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.828 [2024-05-15 09:08:40.560518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.828 qpair failed and we were unable to recover it. 00:42:45.828 [2024-05-15 09:08:40.560704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.560841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.560866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.561025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.561179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.561205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.561364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.561504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.561533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.561682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.561820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.561846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 
00:42:45.829 [2024-05-15 09:08:40.562023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.562136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.562166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.562305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.562418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.562445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.562599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.562764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.562791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.562993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.563126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.563152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.563311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.563446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.563476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.563630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.563767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.563793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.563934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.564039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.564068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 
00:42:45.829 [2024-05-15 09:08:40.564237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.564400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.564426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.564560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.564661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.564687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.564785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.564909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.564942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.565074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.565199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.565232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.565390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.565567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.565596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.565710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.565875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.565905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.566055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.566210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.566242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 
00:42:45.829 [2024-05-15 09:08:40.566371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.566504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.566530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.566690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.566852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.566878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.567008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.567140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.567166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.567301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.567405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.567430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.567526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.567623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.567647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.567780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.567912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.567942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.568075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.568178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.568205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 
00:42:45.829 [2024-05-15 09:08:40.568392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.568519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.568546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.568676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.568865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.568891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.569024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.569147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.569174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.829 qpair failed and we were unable to recover it. 00:42:45.829 [2024-05-15 09:08:40.569275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.829 [2024-05-15 09:08:40.569416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.569446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.569609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.569709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.569734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.569872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.569982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.570009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.570140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.570294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.570321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 
00:42:45.830 [2024-05-15 09:08:40.570451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.570601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.570631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.570788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.570914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.570940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.571047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.571176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.571203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.571391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.571539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.571568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.571684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.571787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.571814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.571938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.572069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.572096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.572243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.572382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.572408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 
00:42:45.830 [2024-05-15 09:08:40.572537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.572690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.572717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.572826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.572967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.572996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.573109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.573255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.573285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.573429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.573567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.573593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.573747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.573920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.573949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.574133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.574258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.574285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.574420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.574552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.574579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 
00:42:45.830 [2024-05-15 09:08:40.574737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.574868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.574897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.575001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.575158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.575184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.575315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.575418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.575443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.575554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.575676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.575702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.575855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.576010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.576036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.576188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.576330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.576356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.576529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.576668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.576698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 
00:42:45.830 [2024-05-15 09:08:40.576809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.576914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.576943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.577061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.577192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.577226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.577379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.577535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.577562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.577687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.577834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.577863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.830 qpair failed and we were unable to recover it. 00:42:45.830 [2024-05-15 09:08:40.578021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.830 [2024-05-15 09:08:40.578146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.578172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 00:42:45.831 [2024-05-15 09:08:40.578344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.578450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.578476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 00:42:45.831 [2024-05-15 09:08:40.578610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.578750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.578779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 
00:42:45.831 [2024-05-15 09:08:40.578947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.579053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.579080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 00:42:45.831 [2024-05-15 09:08:40.579278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.579437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.579464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 00:42:45.831 [2024-05-15 09:08:40.579564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.579656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.579681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 00:42:45.831 [2024-05-15 09:08:40.579812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.579937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.579964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 00:42:45.831 [2024-05-15 09:08:40.580092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.580246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.580273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 00:42:45.831 [2024-05-15 09:08:40.580434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.580532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.580559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 00:42:45.831 [2024-05-15 09:08:40.580679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.580787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.580813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 
00:42:45.831 [2024-05-15 09:08:40.580916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.581040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.581067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 00:42:45.831 [2024-05-15 09:08:40.581204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.581389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.581419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 00:42:45.831 [2024-05-15 09:08:40.581540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.581699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.581725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 00:42:45.831 [2024-05-15 09:08:40.581878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.582021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.582047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 00:42:45.831 [2024-05-15 09:08:40.582181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.582301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.582328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 00:42:45.831 [2024-05-15 09:08:40.582453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.582603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.582630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 00:42:45.831 [2024-05-15 09:08:40.582778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.582932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.582958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 
00:42:45.831 [2024-05-15 09:08:40.583117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.583253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.583285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 00:42:45.831 [2024-05-15 09:08:40.583417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.583542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.583569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 00:42:45.831 [2024-05-15 09:08:40.583677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.583805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.583831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 00:42:45.831 [2024-05-15 09:08:40.583960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.584110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.584138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 00:42:45.831 [2024-05-15 09:08:40.584268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.584401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.584427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 00:42:45.831 [2024-05-15 09:08:40.584570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.584671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.584701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 00:42:45.831 [2024-05-15 09:08:40.584847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.585003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.585029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 
00:42:45.831 [2024-05-15 09:08:40.585185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.585299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.831 [2024-05-15 09:08:40.585325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.831 qpair failed and we were unable to recover it. 00:42:45.832 [2024-05-15 09:08:40.585434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.585534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.585561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.832 qpair failed and we were unable to recover it. 00:42:45.832 [2024-05-15 09:08:40.585716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.585848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.585875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.832 qpair failed and we were unable to recover it. 00:42:45.832 [2024-05-15 09:08:40.586018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.586147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.586174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.832 qpair failed and we were unable to recover it. 00:42:45.832 [2024-05-15 09:08:40.586349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.586500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.586526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.832 qpair failed and we were unable to recover it. 00:42:45.832 [2024-05-15 09:08:40.586650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.586800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.586830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.832 qpair failed and we were unable to recover it. 00:42:45.832 [2024-05-15 09:08:40.586974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.587114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.587141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.832 qpair failed and we were unable to recover it. 
00:42:45.832 [2024-05-15 09:08:40.587270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.587381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.587408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.832 qpair failed and we were unable to recover it. 00:42:45.832 [2024-05-15 09:08:40.587564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.587696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.587725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.832 qpair failed and we were unable to recover it. 00:42:45.832 [2024-05-15 09:08:40.587877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.588008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.588034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.832 qpair failed and we were unable to recover it. 00:42:45.832 [2024-05-15 09:08:40.588151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.588330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.588357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.832 qpair failed and we were unable to recover it. 00:42:45.832 [2024-05-15 09:08:40.588525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.588669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.588699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.832 qpair failed and we were unable to recover it. 00:42:45.832 [2024-05-15 09:08:40.588849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.589008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.589034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.832 qpair failed and we were unable to recover it. 00:42:45.832 [2024-05-15 09:08:40.589138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.589267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:45.832 [2024-05-15 09:08:40.589294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:45.832 qpair failed and we were unable to recover it. 
00:42:45.832 [2024-05-15 09:08:40.589409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:45.832 [2024-05-15 09:08:40.589527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:45.832 [2024-05-15 09:08:40.589558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:45.832 qpair failed and we were unable to recover it.
[... the same four-line failure sequence, two posix_sock_create connect() errors with errno = 111 followed by an nvme_tcp_qpair_connect_sock error for tqpair=0x16a1570 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.", repeats ~150 more times as the log time advances from 00:42:45.832 to 00:42:46.116 (wall clock 09:08:40.589 through 09:08:40.636); only the timestamps differ between repetitions ...]
00:42:46.116 [2024-05-15 09:08:40.636247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.116 [2024-05-15 09:08:40.636460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.116 [2024-05-15 09:08:40.636504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.116 qpair failed and we were unable to recover it.
00:42:46.116 [2024-05-15 09:08:40.636621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.636770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.636797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.116 qpair failed and we were unable to recover it. 00:42:46.116 [2024-05-15 09:08:40.636957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.637078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.637109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.116 qpair failed and we were unable to recover it. 00:42:46.116 [2024-05-15 09:08:40.637269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.637410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.637440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.116 qpair failed and we were unable to recover it. 00:42:46.116 [2024-05-15 09:08:40.637559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.637712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.637741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.116 qpair failed and we were unable to recover it. 00:42:46.116 [2024-05-15 09:08:40.637905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.638119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.638145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.116 qpair failed and we were unable to recover it. 00:42:46.116 [2024-05-15 09:08:40.638267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.638392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.638418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.116 qpair failed and we were unable to recover it. 00:42:46.116 [2024-05-15 09:08:40.638572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.638721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.638748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.116 qpair failed and we were unable to recover it. 
00:42:46.116 [2024-05-15 09:08:40.638885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.639031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.639061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.116 qpair failed and we were unable to recover it. 00:42:46.116 [2024-05-15 09:08:40.639225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.639335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.639361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.116 qpair failed and we were unable to recover it. 00:42:46.116 [2024-05-15 09:08:40.639491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.639619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.639646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.116 qpair failed and we were unable to recover it. 00:42:46.116 [2024-05-15 09:08:40.639804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.639943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.639972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.116 qpair failed and we were unable to recover it. 00:42:46.116 [2024-05-15 09:08:40.640107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.640225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.640254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.116 qpair failed and we were unable to recover it. 00:42:46.116 [2024-05-15 09:08:40.640406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.640540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.640566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.116 qpair failed and we were unable to recover it. 00:42:46.116 [2024-05-15 09:08:40.640726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.640888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.640919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.116 qpair failed and we were unable to recover it. 
00:42:46.116 [2024-05-15 09:08:40.641062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.641194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.641227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.116 qpair failed and we were unable to recover it. 00:42:46.116 [2024-05-15 09:08:40.641385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.641495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.641523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.116 qpair failed and we were unable to recover it. 00:42:46.116 [2024-05-15 09:08:40.641651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.641783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.641809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.116 qpair failed and we were unable to recover it. 00:42:46.116 [2024-05-15 09:08:40.641956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.642121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.642150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.116 qpair failed and we were unable to recover it. 00:42:46.116 [2024-05-15 09:08:40.642297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.642420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.642446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.116 qpair failed and we were unable to recover it. 00:42:46.116 [2024-05-15 09:08:40.642591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.642756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.642782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.116 qpair failed and we were unable to recover it. 00:42:46.116 [2024-05-15 09:08:40.642915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.116 [2024-05-15 09:08:40.643045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.643071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 
00:42:46.117 [2024-05-15 09:08:40.643245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.643353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.643395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 00:42:46.117 [2024-05-15 09:08:40.643538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.643680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.643708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 00:42:46.117 [2024-05-15 09:08:40.643847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.643997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.644026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 00:42:46.117 [2024-05-15 09:08:40.644167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.644325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.644352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 00:42:46.117 [2024-05-15 09:08:40.644512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.644628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.644657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 00:42:46.117 [2024-05-15 09:08:40.644772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.644926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.644955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 00:42:46.117 [2024-05-15 09:08:40.645127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.645293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.645320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 
00:42:46.117 [2024-05-15 09:08:40.645418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.645552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.645578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 00:42:46.117 [2024-05-15 09:08:40.645736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.645892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.645921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 00:42:46.117 [2024-05-15 09:08:40.646069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.646201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.646236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 00:42:46.117 [2024-05-15 09:08:40.646427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.646561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.646590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 00:42:46.117 [2024-05-15 09:08:40.646745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.646904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.646931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 00:42:46.117 [2024-05-15 09:08:40.647065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.647198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.647249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 00:42:46.117 [2024-05-15 09:08:40.647419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.647555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.647582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 
00:42:46.117 [2024-05-15 09:08:40.647762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.647903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.647944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 00:42:46.117 [2024-05-15 09:08:40.648047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.648201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.648234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 00:42:46.117 [2024-05-15 09:08:40.648365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.648515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.648544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 00:42:46.117 [2024-05-15 09:08:40.648687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.648855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.648884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 00:42:46.117 [2024-05-15 09:08:40.649021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.649148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.649179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 00:42:46.117 [2024-05-15 09:08:40.649321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.649437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.649461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 00:42:46.117 [2024-05-15 09:08:40.649563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.649697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.649724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 
00:42:46.117 [2024-05-15 09:08:40.649848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.649946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.649973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 00:42:46.117 [2024-05-15 09:08:40.650129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.650265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.117 [2024-05-15 09:08:40.650296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.117 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.650401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.650522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.650549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.650757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.650905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.650936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.651079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.651245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.651275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.651445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.651591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.651620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.651788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.651919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.651946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 
00:42:46.118 [2024-05-15 09:08:40.652048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.652197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.652234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.652410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.652577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.652606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.652753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.652885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.652912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.653035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.653161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.653187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.653347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.653503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.653530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.653655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.653774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.653801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.653928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.654054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.654082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 
00:42:46.118 [2024-05-15 09:08:40.654222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.654359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.654385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.654502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.654655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.654681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.654837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.654995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.655022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.655234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.655382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.655412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.655600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.655701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.655728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.655861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.656012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.656041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.656196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.656298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.656325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 
00:42:46.118 [2024-05-15 09:08:40.656476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.656632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.656659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.656814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.657027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.657054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.657239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.657405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.657434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.657581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.657736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.657763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.657938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.658069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.658095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.658249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.658375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.658402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.658559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.658663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.658690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 
00:42:46.118 [2024-05-15 09:08:40.658817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.658948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.658974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.659124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.659292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.659321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.118 qpair failed and we were unable to recover it. 00:42:46.118 [2024-05-15 09:08:40.659470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.659579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.118 [2024-05-15 09:08:40.659605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.659715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.659861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.659890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.660044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.660178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.660205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.660422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.660563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.660592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.660758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.660877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.660907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 
00:42:46.119 [2024-05-15 09:08:40.661050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.661195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.661231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.661395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.661526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.661553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.661699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.661959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.662019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.662197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.662326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.662354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.662513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.662612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.662637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.662768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.662906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.662949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.663102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.663229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.663259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 
00:42:46.119 [2024-05-15 09:08:40.663438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.663578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.663622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.663767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.663928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.663955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.664054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.664209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.664243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.664347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.664448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.664474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.664684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.664830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.664858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.665039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.665170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.665196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.665332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.665428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.665458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 
00:42:46.119 [2024-05-15 09:08:40.665589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.665745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.665771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.665928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.666060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.666089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.666247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.666387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.666414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.666541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.666713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.666742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.666879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.667031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.667057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.667211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.667354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.667381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 00:42:46.119 [2024-05-15 09:08:40.667542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.667692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.119 [2024-05-15 09:08:40.667721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.119 qpair failed and we were unable to recover it. 
00:42:46.119 [2024-05-15 09:08:40.667911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.119 [2024-05-15 09:08:40.668043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.119 [2024-05-15 09:08:40.668070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.119 qpair failed and we were unable to recover it.
00:42:46.125 [2024-05-15 09:08:40.717922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.125 [2024-05-15 09:08:40.718103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.125 [2024-05-15 09:08:40.718130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.125 qpair failed and we were unable to recover it.
00:42:46.125 [2024-05-15 09:08:40.718236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.718344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.718369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.125 qpair failed and we were unable to recover it. 00:42:46.125 [2024-05-15 09:08:40.718476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.718618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.718649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.125 qpair failed and we were unable to recover it. 00:42:46.125 [2024-05-15 09:08:40.718799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.718939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.718969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.125 qpair failed and we were unable to recover it. 00:42:46.125 [2024-05-15 09:08:40.719112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.719290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.719317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.125 qpair failed and we were unable to recover it. 00:42:46.125 [2024-05-15 09:08:40.719445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.719603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.719630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.125 qpair failed and we were unable to recover it. 00:42:46.125 [2024-05-15 09:08:40.719753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.719919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.719945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.125 qpair failed and we were unable to recover it. 00:42:46.125 [2024-05-15 09:08:40.720068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.720182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.720212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.125 qpair failed and we were unable to recover it. 
00:42:46.125 [2024-05-15 09:08:40.720340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.720481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.720507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.125 qpair failed and we were unable to recover it. 00:42:46.125 [2024-05-15 09:08:40.720661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.720767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.720794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.125 qpair failed and we were unable to recover it. 00:42:46.125 [2024-05-15 09:08:40.720949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.721074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.721101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.125 qpair failed and we were unable to recover it. 00:42:46.125 [2024-05-15 09:08:40.721232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.721386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.721413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.125 qpair failed and we were unable to recover it. 00:42:46.125 [2024-05-15 09:08:40.721623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.721765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.721794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.125 qpair failed and we were unable to recover it. 00:42:46.125 [2024-05-15 09:08:40.721946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.722083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.722109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.125 qpair failed and we were unable to recover it. 00:42:46.125 [2024-05-15 09:08:40.722262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.722422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.722448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.125 qpair failed and we were unable to recover it. 
00:42:46.125 [2024-05-15 09:08:40.722547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.722700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.722727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.125 qpair failed and we were unable to recover it. 00:42:46.125 [2024-05-15 09:08:40.722933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.723058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.723084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.125 qpair failed and we were unable to recover it. 00:42:46.125 [2024-05-15 09:08:40.723211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.723381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.125 [2024-05-15 09:08:40.723422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.125 qpair failed and we were unable to recover it. 00:42:46.125 [2024-05-15 09:08:40.723564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.723704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.723733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 00:42:46.126 [2024-05-15 09:08:40.723853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.723977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.724003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 00:42:46.126 [2024-05-15 09:08:40.724154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.724363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.724390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 00:42:46.126 [2024-05-15 09:08:40.724523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.724646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.724677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 
00:42:46.126 [2024-05-15 09:08:40.724850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.725014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.725043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 00:42:46.126 [2024-05-15 09:08:40.725227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.725361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.725388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 00:42:46.126 [2024-05-15 09:08:40.725540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.725688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.725718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 00:42:46.126 [2024-05-15 09:08:40.725833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.725967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.725994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 00:42:46.126 [2024-05-15 09:08:40.726129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.726302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.726332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 00:42:46.126 [2024-05-15 09:08:40.726472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.726614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.726644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 00:42:46.126 [2024-05-15 09:08:40.726785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.726923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.726952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 
00:42:46.126 [2024-05-15 09:08:40.727117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.727253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.727280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 00:42:46.126 [2024-05-15 09:08:40.727466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.727601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.727627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 00:42:46.126 [2024-05-15 09:08:40.727834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.727930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.727957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 00:42:46.126 [2024-05-15 09:08:40.728086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.728214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.728247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 00:42:46.126 [2024-05-15 09:08:40.728355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.728460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.728488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 00:42:46.126 [2024-05-15 09:08:40.728610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.728778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.728807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 00:42:46.126 [2024-05-15 09:08:40.728918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.729061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.729092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 
00:42:46.126 [2024-05-15 09:08:40.729205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.729337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.729363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 00:42:46.126 [2024-05-15 09:08:40.729461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.729560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.729585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 00:42:46.126 [2024-05-15 09:08:40.729729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.729871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.729900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 00:42:46.126 [2024-05-15 09:08:40.730015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.730158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.730187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 00:42:46.126 [2024-05-15 09:08:40.730350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.730473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.730499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 00:42:46.126 [2024-05-15 09:08:40.730660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.730819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.730845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 00:42:46.126 [2024-05-15 09:08:40.730999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.731135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.126 [2024-05-15 09:08:40.731164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.126 qpair failed and we were unable to recover it. 
00:42:46.127 [2024-05-15 09:08:40.731303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.731427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.731458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.731604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.731763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.731790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.731998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.732140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.732170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.732353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.732482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.732508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.732645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.732769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.732795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.732906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.733116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.733144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.733287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.733411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.733437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 
00:42:46.127 [2024-05-15 09:08:40.733595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.733764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.733792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.733959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.734091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.734117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.734290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.734416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.734442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.734580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.734718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.734748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.734874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.735025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.735051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.735229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.735375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.735405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.735587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.735719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.735745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 
00:42:46.127 [2024-05-15 09:08:40.735876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.736028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.736055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.736224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.736369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.736399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.736548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.736702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.736731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.736874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.737053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.737079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.737206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.737348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.737374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.737512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.737611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.737638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.737770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.737873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.737904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 
00:42:46.127 [2024-05-15 09:08:40.738044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.738245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.738273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.738396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.738550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.738576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.738807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.738959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.738985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.739118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.739281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.739308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.739465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.739570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.739597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.739749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.739850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.739877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.127 [2024-05-15 09:08:40.740007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.740137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.740163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 
00:42:46.127 [2024-05-15 09:08:40.740284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.740444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.127 [2024-05-15 09:08:40.740471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.127 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.740590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.740727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.740756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.740880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.740982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.741009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.741191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.741299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.741328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.741471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.741575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.741605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.741818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.741949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.741978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.742135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.742263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.742290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 
00:42:46.128 [2024-05-15 09:08:40.742449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.742590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.742618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.742776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.742982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.743008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.743182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.743372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.743399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.743554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.743686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.743713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.743814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.743965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.743992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.744170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.744334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.744362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.744501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.744695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.744722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 
00:42:46.128 [2024-05-15 09:08:40.744842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.744995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.745021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.745183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.745352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.745379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.745504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.745667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.745693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.745851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.745993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.746023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.746192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.746378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.746408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.746580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.746835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.746894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.747063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.747210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.747246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 
00:42:46.128 [2024-05-15 09:08:40.747365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.747524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.747550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.747675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.747832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.747858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.748037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.748207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.748267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.748416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.748584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.748613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.748781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.748948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.749014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.749170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.749324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.749351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.749482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.749623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.749692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 
00:42:46.128 [2024-05-15 09:08:40.749860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.750008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.750038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.128 qpair failed and we were unable to recover it. 00:42:46.128 [2024-05-15 09:08:40.750177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.128 [2024-05-15 09:08:40.750397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.750424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.750579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.750731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.750760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.750900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.751037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.751066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.751177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.751330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.751357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.751487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.751621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.751651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.751786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.751939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.751966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 
00:42:46.129 [2024-05-15 09:08:40.752067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.752191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.752225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.752331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.752460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.752486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.752635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.752806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.752835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.752985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.753115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.753141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.753295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.753483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.753510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.753663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.753823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.753849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.754013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.754189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.754224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 
00:42:46.129 [2024-05-15 09:08:40.754397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.754530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.754556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.754753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.754915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.754946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.755074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.755212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.755247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.755413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.755555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.755584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.755703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.755855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.755882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.756006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.756147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.756176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.756339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.756531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.756557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 
00:42:46.129 [2024-05-15 09:08:40.756688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.756789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.756833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.756951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.757057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.757083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.757266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.757405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.757434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.757552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.757688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.757722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.757883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.758037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.758063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.758205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.758341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.758368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.758517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.758669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.758698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 
00:42:46.129 [2024-05-15 09:08:40.758839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.758985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.759014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.759163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.759341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.759370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.759502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.759660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.759686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.759818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.759988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.760017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.129 qpair failed and we were unable to recover it. 00:42:46.129 [2024-05-15 09:08:40.760164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.760288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.129 [2024-05-15 09:08:40.760317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.760437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.760584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.760613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.760776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.760877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.760903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 
00:42:46.130 [2024-05-15 09:08:40.761068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.761198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.761231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.761370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.761487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.761516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.761640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.761782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.761811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.761961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.762077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.762104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.762266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.762437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.762466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.762611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.762780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.762809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.762950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.763065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.763095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 
00:42:46.130 [2024-05-15 09:08:40.763260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.763370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.763413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.763537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.763675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.763704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.763849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.763955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.763985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.764128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.764276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.764306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.764479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.764596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.764623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.764750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.764895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.764924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.765035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.765203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.765240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 
00:42:46.130 [2024-05-15 09:08:40.765383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.765507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.765536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.765660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.765789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.765815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.765946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.766084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.766110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.766213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.766417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.766444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.766543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.766672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.766698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.766826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.766955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.766982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.767128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.767304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.767331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 
00:42:46.130 [2024-05-15 09:08:40.767446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.767631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.767657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.767762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.767888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.767917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.768093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.768203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.768237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.768342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.768458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.768485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.768592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.768726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.768756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.768902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.769054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.769081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 00:42:46.130 [2024-05-15 09:08:40.769234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.769342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.769368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.130 qpair failed and we were unable to recover it. 
00:42:46.130 [2024-05-15 09:08:40.769532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16af0f0 is same with the state(5) to be set 00:42:46.130 [2024-05-15 09:08:40.769709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.130 [2024-05-15 09:08:40.769894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.769928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.770049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.770165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.770196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.770326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.770460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.770488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.770628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.770782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.770810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.770916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.771050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.771094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.771210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.771348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.771374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.771519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.771686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.771715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 
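Note: the block above differs from the surrounding pattern in two ways: it opens with nvme_tcp_qpair_set_recv_state complaining that tqpair 0x16af0f0 is being set to a receive state it is already in, and the two failures that follow are attributed to tqpair 0x7f9f30000b90 before the log returns to 0x16a1570. The retry-then-give-up behaviour visible throughout ("qpair failed and we were unable to recover it" after repeated ECONNREFUSED) can be pictured as a bounded retry loop like the following (a hypothetical illustration; connect_with_retries is not an SPDK function):

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Hypothetical helper, not SPDK code: try connect() a bounded number of
     * times, logging each failure, then report the connection as unrecoverable. */
    static int connect_with_retries(const char *ip, uint16_t port, int attempts)
    {
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };
        inet_pton(AF_INET, ip, &sa.sin_addr);

        for (int i = 0; i < attempts; i++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0)
                return fd;   /* connected: hand back the socket */
            fprintf(stderr, "attempt %d: connect() failed, errno = %d (%s)\n",
                    i + 1, errno, strerror(errno));
            close(fd);
        }
        return -1;   /* caller treats this as "failed and unable to recover" */
    }

    int main(void)
    {
        return connect_with_retries("10.0.0.2", 4420, 3) < 0;
    }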
00:42:46.131 [2024-05-15 09:08:40.771821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.771962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.771991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.772146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.772244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.772269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.772447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.772605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.772631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.772758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.772856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.772882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.773036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.773185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.773212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.773349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.773464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.773490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.773638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.773760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.773789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 
00:42:46.131 [2024-05-15 09:08:40.773919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.774022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.774048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.774155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.774306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.774334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.774441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.774543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.774569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.774730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.774863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.774889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.774998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.775097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.775124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.775306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.775413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.775439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.775537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.775668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.775695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 
00:42:46.131 [2024-05-15 09:08:40.775869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.776002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.776028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.776161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.776279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.776308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.776455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.776586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.776616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.776824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.776993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.777022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.777141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.777261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.777291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.777432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.777547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.777573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 00:42:46.131 [2024-05-15 09:08:40.777704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.777850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.131 [2024-05-15 09:08:40.777879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.131 qpair failed and we were unable to recover it. 
00:42:46.132 [2024-05-15 09:08:40.778048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.778206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.778239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.778364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.778496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.778522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.778686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.778853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.778882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.778992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.779133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.779159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.779251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.779380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.779406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.779561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.779749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.779775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.779878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.780045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.780074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 
00:42:46.132 [2024-05-15 09:08:40.780228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.780340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.780366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.780548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.780764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.780830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.780975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.781121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.781149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.781275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.781405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.781431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.781537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.781692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.781718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.781867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.782016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.782045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.782214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.782327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.782370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 
00:42:46.132 [2024-05-15 09:08:40.782543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.782684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.782712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.782836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.782959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.782987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.783124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.783288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.783315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.783458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.783602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.783632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.783747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.783886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.783912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.784046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.784144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.784170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.784301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.784437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.784463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 
00:42:46.132 [2024-05-15 09:08:40.784594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.784736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.784778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.784888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.785020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.785046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.785191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.785348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.785374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.785474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.785620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.785649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.785798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.785924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.785950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.786100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.786256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.786283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.786414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.786554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.786583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 
00:42:46.132 [2024-05-15 09:08:40.786708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.786835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.786862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.787015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.787150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.787179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.787359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.787471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.132 [2024-05-15 09:08:40.787497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.132 qpair failed and we were unable to recover it. 00:42:46.132 [2024-05-15 09:08:40.787623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.133 [2024-05-15 09:08:40.787780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.133 [2024-05-15 09:08:40.787806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.133 qpair failed and we were unable to recover it. 00:42:46.133 [2024-05-15 09:08:40.787975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.133 [2024-05-15 09:08:40.788118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.133 [2024-05-15 09:08:40.788147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.133 qpair failed and we were unable to recover it. 00:42:46.133 [2024-05-15 09:08:40.788327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.133 [2024-05-15 09:08:40.788447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.133 [2024-05-15 09:08:40.788478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.133 qpair failed and we were unable to recover it. 00:42:46.133 [2024-05-15 09:08:40.788630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.133 [2024-05-15 09:08:40.788741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.133 [2024-05-15 09:08:40.788767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.133 qpair failed and we were unable to recover it. 
00:42:46.133 [2024-05-15 09:08:40.788894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.789009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.789038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.789187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.789368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.789397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.789529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.789670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.789697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.789793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.789898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.789924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.790029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.790123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.790148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.790277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.790436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.790462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.790576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.790743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.790772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.790914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.791054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.791083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.791226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.791379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.791404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.791533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.791663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.791689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.791860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.792004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.792032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.792184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.792323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.792354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.792457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.792585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.792612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.792737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.792861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.792887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.793022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.793163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.793191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.793328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.793451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.793477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.793611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.793734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.793759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.793900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.794073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.794101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.794250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.794390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.794433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.794534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.794683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.794712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.794885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.795036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.795080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.795192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.795343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.795377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.795555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.795701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.795729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.795860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.795966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.795992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.796144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.796257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.796286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.796434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.796545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.796574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.133 qpair failed and we were unable to recover it.
00:42:46.133 [2024-05-15 09:08:40.796719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.796833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.133 [2024-05-15 09:08:40.796861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.796990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.797134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.797162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.797322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.797438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.797464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.797572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.797677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.797703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.797850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.797961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.797990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.798097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.798234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.798274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.798420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.798547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.798573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.798685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.798842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.798868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.799030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.799173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.799202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.799338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.799462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.799488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.799638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.799806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.799835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.799974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.800143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.800172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.800322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.800429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.800456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.800588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.800719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.800746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.800875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.801043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.801072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.801199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.801334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.801361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.801486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.801649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.801679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.801815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.801953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.801979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.802111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.802242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.802269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.802377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.802505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.802532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.802694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.802850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.802878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.803027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.803161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.803188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.803359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.803485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.803511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.803637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.803806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.803833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.803984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.804114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.804140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.804274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.804384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.804412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.804524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.804679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.804709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.804889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.804991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.805018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.805140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.805269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.805299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.805403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.805518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.805545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.805696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.805830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.805857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.805994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.806180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.806206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.806331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.806492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.806518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.134 qpair failed and we were unable to recover it.
00:42:46.134 [2024-05-15 09:08:40.806646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.806785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.134 [2024-05-15 09:08:40.806812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.806969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.807139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.807168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.807318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.807424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.807452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.807607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.807744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.807771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.807930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.808063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.808093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.808242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.808398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.808424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.808553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.808649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.808675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.808795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.808940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.808969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.809088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.809267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.809297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.809443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.809575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.809601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.809702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.809802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.809829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.809991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.810117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.810146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.810293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.810397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.810423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.810542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.810722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.810753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.810885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.811043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.811073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.811200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.811327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.811354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.811478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.811591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.811621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.811762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.811942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.811969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.812077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.812207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.812241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.812363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.812474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.812503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.812610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.812753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.812779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.812923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.813048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.813074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.813222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.813363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.813392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.813564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.813706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.813735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.813885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.814011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.814037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.814229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.814376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.814406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.814593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.814748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.814775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.814873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.814995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.815021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.815155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.815296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.815325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.815466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.815613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.815642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.135 qpair failed and we were unable to recover it.
00:42:46.135 [2024-05-15 09:08:40.815796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.135 [2024-05-15 09:08:40.815927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.815953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.816122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.816296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.816324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.816494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.816626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.816652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.816774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.816880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.816906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.817089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.817264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.817291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.817451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.817581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.817607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.817709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.817813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.817839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.817985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.818100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.818129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.818301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.818463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.818489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.818620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.818728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.818756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.818879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.819053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.819082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.819229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.819354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.819382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.819498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.819630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.819656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.819829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.819973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.820002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.820117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.820301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.820328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.820456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.820559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.820585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.820696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.820841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.820870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.821020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.821189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.821226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.821364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.821470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.821498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.821686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.821826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.821855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.822023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.822185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.822213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.822346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.822472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.822499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.822639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.822781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.822810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.822962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.823086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.823113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.823245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.823383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.823409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.823541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.823649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.823675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.823823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.823959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.823988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.824113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.824233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.824260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.824443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.824582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.824610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.824757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.824912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.824938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.825037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.825148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.825174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.825284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.825398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.825425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.825559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.825734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.825762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.136 qpair failed and we were unable to recover it.
00:42:46.136 [2024-05-15 09:08:40.825943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.136 [2024-05-15 09:08:40.826117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.826146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.826314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.826458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.826491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.826643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.826778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.826804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.826919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.827073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.827099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.827237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.827408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.827437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.827602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.827760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.827792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.827957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.828080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.828106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.828281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.828414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.828440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.828553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.828720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.828749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.828887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.829027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.829053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.829227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.829399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.829427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.829567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.829677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.829706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.829866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.830016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.830041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.830148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.830306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.830336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.830455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.830590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.830619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.830743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.830840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.830867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.831017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.831153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.831182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.831310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.831432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.831458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.831582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.831673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.831699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.831800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.831930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.831956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.832102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.832249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.832279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.832409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.832540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.832567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.832681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.832831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.832859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.833025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.833191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.833226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.833351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.833488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.833514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.833660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.833795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.833824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.833967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.834111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.834140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.834285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.834414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.834440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.834564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.834718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.834745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.834853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.834974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.835000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.835154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.835268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.835295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.835421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.835533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.137 [2024-05-15 09:08:40.835562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.137 qpair failed and we were unable to recover it.
00:42:46.137 [2024-05-15 09:08:40.835733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.137 [2024-05-15 09:08:40.835879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.137 [2024-05-15 09:08:40.835908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.137 qpair failed and we were unable to recover it. 00:42:46.137 [2024-05-15 09:08:40.836063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.137 [2024-05-15 09:08:40.836197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.137 [2024-05-15 09:08:40.836269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.137 qpair failed and we were unable to recover it. 00:42:46.137 [2024-05-15 09:08:40.836421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.137 [2024-05-15 09:08:40.836562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.137 [2024-05-15 09:08:40.836592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.137 qpair failed and we were unable to recover it. 00:42:46.137 [2024-05-15 09:08:40.836761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.137 [2024-05-15 09:08:40.836906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.137 [2024-05-15 09:08:40.836935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.137 qpair failed and we were unable to recover it. 00:42:46.137 [2024-05-15 09:08:40.837090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.137 [2024-05-15 09:08:40.837228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.837255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.837437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.837595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.837620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.837800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.837952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.837978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 
00:42:46.138 [2024-05-15 09:08:40.838132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.838239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.838266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.838455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.838614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.838641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.838743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.838886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.838915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.839085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.839226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.839253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.839416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.839544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.839571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.839701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.839869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.839898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.840049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.840171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.840197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 
00:42:46.138 [2024-05-15 09:08:40.840388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.840529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.840572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.840701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.840823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.840852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.841031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.841132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.841158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.841327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.841496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.841525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.841675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.841805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.841831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.841956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.842083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.842109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.842233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.842348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.842382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 
00:42:46.138 [2024-05-15 09:08:40.842536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.842672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.842698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.842826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.842931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.842957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.843101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.843249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.843276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.843375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.843482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.843508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.843646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.843772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.843799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.843891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.844043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.844072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.844233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.844367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.844393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 
00:42:46.138 [2024-05-15 09:08:40.844507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.844637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.844663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.844816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.844953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.844982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.845134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.845247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.845277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.845431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.845538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.845564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.845669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.845818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.845846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.845990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.846130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.846159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.846290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.846442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.846468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 
00:42:46.138 [2024-05-15 09:08:40.846580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.846680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.846706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.846805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.846935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.846962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.847091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.847222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.847248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.847375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.847487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.847514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.847630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.847736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.847763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.138 qpair failed and we were unable to recover it. 00:42:46.138 [2024-05-15 09:08:40.847896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.138 [2024-05-15 09:08:40.848050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.848092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.848206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.848323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.848352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 
00:42:46.139 [2024-05-15 09:08:40.848486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.848608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.848636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.848789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.848961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.848991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.849116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.849251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.849294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.849422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.849540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.849570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.849724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.849852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.849879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.850013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.850157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.850186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.850354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.850477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.850504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 
00:42:46.139 [2024-05-15 09:08:40.850674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.850813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.850840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.850997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.851141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.851170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.851354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.851510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.851536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.851713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.851844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.851871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.852024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.852156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.852182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.852293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.852444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.852471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.852604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.852704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.852731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 
00:42:46.139 [2024-05-15 09:08:40.852909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.853040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.853069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.853237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.853418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.853448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.853622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.853757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.853783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.853904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.854077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.854106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.854256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.854425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.854453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.854600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.854741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.854767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.854924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.855086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.855115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 
00:42:46.139 [2024-05-15 09:08:40.855272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.855394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.855423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.855607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.855741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.855768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.855929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.856049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.856079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.856233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.856357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.856384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.856508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.856610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.856636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.856794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.856897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.856924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.857084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.857179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.857205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 
00:42:46.139 [2024-05-15 09:08:40.857370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.857504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.857546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.857712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.857857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.857891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.139 qpair failed and we were unable to recover it. 00:42:46.139 [2024-05-15 09:08:40.858039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.139 [2024-05-15 09:08:40.858194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.858228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.858368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.858477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.858503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.858617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.858764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.858790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.858917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.859071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.859100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.859239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.859350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.859377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 
00:42:46.140 [2024-05-15 09:08:40.859488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.859582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.859608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.859732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.859910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.859939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.860075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.860186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.860213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.860391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.860494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.860520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.860652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.860810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.860844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.860998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.861133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.861159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.861325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.861497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.861526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 
00:42:46.140 [2024-05-15 09:08:40.861638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.861748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.861777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.861923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.862081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.862107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.862286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.862428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.862469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.862598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.862705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.862730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.862835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.862987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.863013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.863144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.863289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.863318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.863434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.863547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.863577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 
00:42:46.140 [2024-05-15 09:08:40.863745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.863871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.863897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.864056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.864235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.864265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.864377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.864542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.864572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.864732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.864838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.864863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.865019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.865159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.865185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.865305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.865441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.865467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 00:42:46.140 [2024-05-15 09:08:40.865603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.865719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.140 [2024-05-15 09:08:40.865744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.140 qpair failed and we were unable to recover it. 
00:42:46.140 [2024-05-15 09:08:40.867322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.140 [2024-05-15 09:08:40.867502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.140 [2024-05-15 09:08:40.867533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.140 qpair failed and we were unable to recover it.
[... this three-message cycle (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock connection error for tqpair=0x16a1570 at 10.0.0.2:4420, then "qpair failed and we were unable to recover it.") repeats continuously with only the timestamps advancing, through 09:08:40.902621 ...]
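Every cycle in this run is the same failure: on Linux, errno 111 is ECONNREFUSED, meaning nothing was accepting TCP connections at 10.0.0.2:4420 (the IANA-assigned NVMe/TCP port) when the initiator called connect(). A minimal, self-contained C sketch of the failing step, independent of SPDK (only the address and port are taken from the log; everything else is illustrative):

```c
/* Illustrative sketch only, not SPDK code: a plain blocking connect() to
 * the address and port from the log. If no listener is present, connect()
 * fails and errno is ECONNREFUSED (111 on Linux), matching the repeated
 * posix_sock_create errors above. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* Expected while the target is down:
         * "connect() failed, errno = 111 (Connection refused)" */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```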
[... four further identical cycles for tqpair=0x16a1570 follow, through 09:08:40.903816, before the failing qpair address changes ...]
00:42:46.424 [2024-05-15 09:08:40.903975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.424 [2024-05-15 09:08:40.904142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.424 [2024-05-15 09:08:40.904176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420
00:42:46.424 qpair failed and we were unable to recover it.
[... the same cycle then repeats for tqpair=0x7f9f40000b90, again with only the timestamps advancing, through 09:08:40.915599 ...]
00:42:46.425 [2024-05-15 09:08:40.915746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.425 [2024-05-15 09:08:40.915881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.425 [2024-05-15 09:08:40.915910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.425 qpair failed and we were unable to recover it. 00:42:46.425 [2024-05-15 09:08:40.916042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.425 [2024-05-15 09:08:40.916193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.425 [2024-05-15 09:08:40.916231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.425 qpair failed and we were unable to recover it. 00:42:46.425 [2024-05-15 09:08:40.916363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.425 [2024-05-15 09:08:40.916476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.425 [2024-05-15 09:08:40.916522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.425 qpair failed and we were unable to recover it. 00:42:46.425 [2024-05-15 09:08:40.916666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.425 [2024-05-15 09:08:40.916841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.425 [2024-05-15 09:08:40.916866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.425 qpair failed and we were unable to recover it. 00:42:46.425 [2024-05-15 09:08:40.917047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.425 [2024-05-15 09:08:40.917161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.425 [2024-05-15 09:08:40.917190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.425 qpair failed and we were unable to recover it. 00:42:46.425 [2024-05-15 09:08:40.917354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.425 [2024-05-15 09:08:40.917467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.425 [2024-05-15 09:08:40.917509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.425 qpair failed and we were unable to recover it. 00:42:46.425 [2024-05-15 09:08:40.917617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.425 [2024-05-15 09:08:40.917731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.425 [2024-05-15 09:08:40.917760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.425 qpair failed and we were unable to recover it. 
00:42:46.425 [2024-05-15 09:08:40.917883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.918069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.918098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 00:42:46.426 [2024-05-15 09:08:40.918276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.918379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.918406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 00:42:46.426 [2024-05-15 09:08:40.918526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.918678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.918707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 00:42:46.426 [2024-05-15 09:08:40.918849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.918966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.918995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 00:42:46.426 [2024-05-15 09:08:40.919139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.919313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.919340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 00:42:46.426 [2024-05-15 09:08:40.919449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.919591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.919617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 00:42:46.426 [2024-05-15 09:08:40.919812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.919958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.919987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 
00:42:46.426 [2024-05-15 09:08:40.920155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.920288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.920316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 00:42:46.426 [2024-05-15 09:08:40.920450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.920562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.920588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 00:42:46.426 [2024-05-15 09:08:40.920692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.920796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.920821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 00:42:46.426 [2024-05-15 09:08:40.920974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.921095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.921124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 00:42:46.426 [2024-05-15 09:08:40.921285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.921415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.921441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 00:42:46.426 [2024-05-15 09:08:40.921590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.921775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.921805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 00:42:46.426 [2024-05-15 09:08:40.921989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.922146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.922172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 
00:42:46.426 [2024-05-15 09:08:40.922310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.922423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.922450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 00:42:46.426 [2024-05-15 09:08:40.922601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.922720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.922761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 00:42:46.426 [2024-05-15 09:08:40.922907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.923059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.923087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 00:42:46.426 [2024-05-15 09:08:40.923201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.923338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.923365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 00:42:46.426 [2024-05-15 09:08:40.923472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.923579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.923606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 00:42:46.426 [2024-05-15 09:08:40.923764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.923907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.923936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 00:42:46.426 [2024-05-15 09:08:40.924092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.924248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.924274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 
00:42:46.426 [2024-05-15 09:08:40.924398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.924518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.924547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 00:42:46.426 [2024-05-15 09:08:40.924665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.924776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.426 [2024-05-15 09:08:40.924810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.426 qpair failed and we were unable to recover it. 00:42:46.426 [2024-05-15 09:08:40.924954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.925076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.925105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.925299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.925435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.925462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.925599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.925706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.925737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.925851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.925999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.926028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.926168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.926310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.926338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 
00:42:46.427 [2024-05-15 09:08:40.926442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.926568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.926594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.926774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.926922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.926963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.927073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.927269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.927295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.927398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.927501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.927529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.927697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.927839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.927867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.928013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.928142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.928168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.928285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.928414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.928441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 
00:42:46.427 [2024-05-15 09:08:40.928565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.928719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.928750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.928886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.929022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.929066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.929241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.929387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.929413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.929543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.929683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.929712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.929853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.929986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.930015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.930171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.930347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.930373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.930475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.930627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.930670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 
00:42:46.427 [2024-05-15 09:08:40.930775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.930877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.930921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.931072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.931220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.931250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.931411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.931541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.931571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.931692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.931841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.931875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.932018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.932174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.932201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.932329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.932467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.932503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.932656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.932766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.932796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 
00:42:46.427 [2024-05-15 09:08:40.932953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.933095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.933124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.933282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.933433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.933459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.427 qpair failed and we were unable to recover it. 00:42:46.427 [2024-05-15 09:08:40.933581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.933698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.427 [2024-05-15 09:08:40.933724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.933894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.934007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.934036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.934155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.934343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.934369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.934477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.934621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.934650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.934790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.934908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.934941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 
00:42:46.428 [2024-05-15 09:08:40.935089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.935203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.935237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.935372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.935468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.935516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.935653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.935782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.935809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.935959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.936132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.936161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.936306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.936412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.936439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.936546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.936662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.936704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.936824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.936972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.937001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 
00:42:46.428 [2024-05-15 09:08:40.937153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.937291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.937319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.937451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.937557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.937584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.937713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.937826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.937857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.938008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.938170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.938196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.938319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.938447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.938485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.938668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.938831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.938858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.939022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.939166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.939195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 
00:42:46.428 [2024-05-15 09:08:40.939338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.939467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.939500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.939607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.939727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.939755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.939893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.940005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.940034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.940155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.940293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.940321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.940434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.940593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.940622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.940770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.940910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.940939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.941083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.941230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.941276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 
00:42:46.428 [2024-05-15 09:08:40.941391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.941520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.941547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.941709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.941832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.941861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.942058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.942190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.942221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.428 qpair failed and we were unable to recover it. 00:42:46.428 [2024-05-15 09:08:40.942342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.428 [2024-05-15 09:08:40.942470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.429 [2024-05-15 09:08:40.942497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.429 qpair failed and we were unable to recover it. 00:42:46.429 [2024-05-15 09:08:40.942625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.429 [2024-05-15 09:08:40.942767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.429 [2024-05-15 09:08:40.942796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.429 qpair failed and we were unable to recover it. 00:42:46.429 [2024-05-15 09:08:40.942966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.429 [2024-05-15 09:08:40.943112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.429 [2024-05-15 09:08:40.943140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.429 qpair failed and we were unable to recover it. 00:42:46.429 [2024-05-15 09:08:40.943310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.429 [2024-05-15 09:08:40.943418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.429 [2024-05-15 09:08:40.943444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.429 qpair failed and we were unable to recover it. 
00:42:46.429 [2024-05-15 09:08:40.943624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.429 [2024-05-15 09:08:40.943781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.429 [2024-05-15 09:08:40.943811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.429 qpair failed and we were unable to recover it. 00:42:46.429 [2024-05-15 09:08:40.944039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.429 [2024-05-15 09:08:40.944198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.429 [2024-05-15 09:08:40.944232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.429 qpair failed and we were unable to recover it. 00:42:46.429 [2024-05-15 09:08:40.944375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.429 [2024-05-15 09:08:40.944469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.429 [2024-05-15 09:08:40.944498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.429 qpair failed and we were unable to recover it. 00:42:46.429 [2024-05-15 09:08:40.944595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.429 [2024-05-15 09:08:40.944746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.429 [2024-05-15 09:08:40.944776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.429 qpair failed and we were unable to recover it. 00:42:46.429 [2024-05-15 09:08:40.944924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.429 [2024-05-15 09:08:40.945093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.429 [2024-05-15 09:08:40.945122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.429 qpair failed and we were unable to recover it. 00:42:46.429 [2024-05-15 09:08:40.945246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.429 [2024-05-15 09:08:40.945365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.429 [2024-05-15 09:08:40.945391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.429 qpair failed and we were unable to recover it. 00:42:46.429 [2024-05-15 09:08:40.945497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.429 [2024-05-15 09:08:40.945625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.429 [2024-05-15 09:08:40.945655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.429 qpair failed and we were unable to recover it. 
00:42:46.429 [2024-05-15 09:08:40.945859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:42:46.429 [2024-05-15 09:08:40.945999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:42:46.429 [2024-05-15 09:08:40.946027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 
00:42:46.429 qpair failed and we were unable to recover it. 
00:42:46.429 [... the same four-line failure sequence (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f9f40000b90 against 10.0.0.2:4420, and "qpair failed and we were unable to recover it.") repeats for every reconnect attempt between 09:08:40.946 and 09:08:40.998; the repeated entries are elided here ...] 
00:42:46.436 [2024-05-15 09:08:40.998249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:42:46.436 [2024-05-15 09:08:40.998363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:42:46.436 [2024-05-15 09:08:40.998388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 
00:42:46.436 qpair failed and we were unable to recover it. 
00:42:46.436 [2024-05-15 09:08:40.998522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:40.998644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:40.998669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:40.998802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:40.998905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:40.998930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:40.999040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:40.999157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:40.999183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:40.999371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:40.999496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:40.999523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:40.999677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:40.999816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:40.999842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:41.000004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.000165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.000193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:41.000329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.000442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.000466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 
00:42:46.436 [2024-05-15 09:08:41.000604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.000733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.000758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:41.000873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.001033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.001061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:41.001236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.001345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.001370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:41.001506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.001632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.001658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:41.001871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.002023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.002049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:41.002183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.002306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.002332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:41.002435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.002541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.002566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 
00:42:46.436 [2024-05-15 09:08:41.002691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.002854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.002879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:41.002978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.003081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.003106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:41.003251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.003374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.003400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:41.003563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.003689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.003714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:41.003825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.003981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.004006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:41.004137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.004272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.004298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:41.004407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.004532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.004557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 
00:42:46.436 [2024-05-15 09:08:41.004686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.004776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.004800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:41.004933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.005078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.005110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:41.005252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.005406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.005431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:41.005594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.005776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.005827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:41.005977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.006071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.006097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.436 qpair failed and we were unable to recover it. 00:42:46.436 [2024-05-15 09:08:41.006210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.436 [2024-05-15 09:08:41.006325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.006350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 00:42:46.437 [2024-05-15 09:08:41.006459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.006597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.006622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 
00:42:46.437 [2024-05-15 09:08:41.006767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.006862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.006888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 00:42:46.437 [2024-05-15 09:08:41.006998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.007130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.007155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 00:42:46.437 [2024-05-15 09:08:41.007270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.007376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.007401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 00:42:46.437 [2024-05-15 09:08:41.007508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.007638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.007663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 00:42:46.437 [2024-05-15 09:08:41.007787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.007917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.007945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 00:42:46.437 [2024-05-15 09:08:41.008070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.008222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.008249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 00:42:46.437 [2024-05-15 09:08:41.008357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.008468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.008493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 
00:42:46.437 [2024-05-15 09:08:41.008633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.008745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.008770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 00:42:46.437 [2024-05-15 09:08:41.008880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.009014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.009041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 00:42:46.437 [2024-05-15 09:08:41.009162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.009278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.009303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 00:42:46.437 [2024-05-15 09:08:41.009436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.009568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.009593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 00:42:46.437 [2024-05-15 09:08:41.009693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.009854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.009888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 00:42:46.437 [2024-05-15 09:08:41.010014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.010164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.010189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 00:42:46.437 [2024-05-15 09:08:41.010303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.010428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.010453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 
00:42:46.437 [2024-05-15 09:08:41.010567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.010667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.010698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 00:42:46.437 [2024-05-15 09:08:41.010835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.010953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.010978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 00:42:46.437 [2024-05-15 09:08:41.011076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.011220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.011246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 00:42:46.437 [2024-05-15 09:08:41.011378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.011487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.011512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 00:42:46.437 [2024-05-15 09:08:41.011650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.011764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.011789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 00:42:46.437 [2024-05-15 09:08:41.011887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.012001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.012025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 00:42:46.437 [2024-05-15 09:08:41.012125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.012269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.012295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 
00:42:46.437 [2024-05-15 09:08:41.012424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.012521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.012546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 00:42:46.437 [2024-05-15 09:08:41.012653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.012755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.437 [2024-05-15 09:08:41.012779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.437 qpair failed and we were unable to recover it. 00:42:46.437 [2024-05-15 09:08:41.012909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.013009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.013035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.013141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.013269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.013295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.013433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.013564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.013589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.013721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.013850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.013875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.014007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.014114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.014138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 
00:42:46.438 [2024-05-15 09:08:41.014280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.014391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.014415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.014513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.014628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.014653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.014775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.014903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.014928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.015036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.015161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.015186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.015329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.015439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.015464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.015600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.015732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.015758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.015870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.015978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.016003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 
00:42:46.438 [2024-05-15 09:08:41.016112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.016246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.016272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.016404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.016513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.016538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.016672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.016827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.016852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.016956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.017086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.017111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.017223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.017325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.017350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.017460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.017573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.017598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.017711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.017862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.017887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 
00:42:46.438 [2024-05-15 09:08:41.017993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.018124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.018149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.018261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.018383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.018408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.018558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.018657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.018681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.018780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.018929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.018954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.019106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.019213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.019243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.019347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.019483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.019508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.019642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.019770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.019795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 
00:42:46.438 [2024-05-15 09:08:41.019925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.020031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.020057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.020163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.020277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.020303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.020404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.020511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.438 [2024-05-15 09:08:41.020536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.438 qpair failed and we were unable to recover it. 00:42:46.438 [2024-05-15 09:08:41.020637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.020742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.020766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.439 qpair failed and we were unable to recover it. 00:42:46.439 [2024-05-15 09:08:41.020875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.021005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.021029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.439 qpair failed and we were unable to recover it. 00:42:46.439 [2024-05-15 09:08:41.021130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.021267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.021292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.439 qpair failed and we were unable to recover it. 00:42:46.439 [2024-05-15 09:08:41.021394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.021521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.021548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.439 qpair failed and we were unable to recover it. 
00:42:46.439 [2024-05-15 09:08:41.021658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.021763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.021787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.439 qpair failed and we were unable to recover it. 00:42:46.439 [2024-05-15 09:08:41.021942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.022071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.022096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.439 qpair failed and we were unable to recover it. 00:42:46.439 [2024-05-15 09:08:41.022201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.022317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.022342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.439 qpair failed and we were unable to recover it. 00:42:46.439 [2024-05-15 09:08:41.022469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.022603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.022628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.439 qpair failed and we were unable to recover it. 00:42:46.439 [2024-05-15 09:08:41.022759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.022888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.022914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.439 qpair failed and we were unable to recover it. 00:42:46.439 [2024-05-15 09:08:41.023037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.023169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.023194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.439 qpair failed and we were unable to recover it. 00:42:46.439 [2024-05-15 09:08:41.023332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.023436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.023461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.439 qpair failed and we were unable to recover it. 
00:42:46.439 [2024-05-15 09:08:41.023580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.023679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.023704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.439 qpair failed and we were unable to recover it. 00:42:46.439 [2024-05-15 09:08:41.023831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.023988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.024013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.439 qpair failed and we were unable to recover it. 00:42:46.439 [2024-05-15 09:08:41.024161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.024304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.024333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.439 qpair failed and we were unable to recover it. 00:42:46.439 [2024-05-15 09:08:41.024469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.024637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.024662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.439 qpair failed and we were unable to recover it. 00:42:46.439 [2024-05-15 09:08:41.024790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.024895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.024920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.439 qpair failed and we were unable to recover it. 00:42:46.439 [2024-05-15 09:08:41.025014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.025147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.025172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.439 qpair failed and we were unable to recover it. 00:42:46.439 [2024-05-15 09:08:41.025337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.025442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.439 [2024-05-15 09:08:41.025467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.439 qpair failed and we were unable to recover it. 
00:42:46.439 [2024-05-15 09:08:41.025572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:42:46.439 [2024-05-15 09:08:41.025698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:42:46.439 [2024-05-15 09:08:41.025723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 
00:42:46.439 qpair failed and we were unable to recover it. 
00:42:46.445 [... the same four-line failure sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x16a1570 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats back-to-back from 09:08:41.025853 through 09:08:41.069155 ...]
00:42:46.445 [2024-05-15 09:08:41.069322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.069428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.069453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.069579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.069735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.069776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.069915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.070049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.070077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.070186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.070340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.070365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.070519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.070625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.070655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.070839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.070982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.071010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.071179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.071340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.071369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 
00:42:46.445 [2024-05-15 09:08:41.071493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.071620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.071646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.071786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.071929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.071954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.072108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.072246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.072275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.072458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.072579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.072619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.072778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.072883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.072908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.073043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.073186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.073214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.073404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.073544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.073569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 
00:42:46.445 [2024-05-15 09:08:41.073729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.073866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.073894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.074044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.074197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.074229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.074365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.074521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.074564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.074751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.075061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.075113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.075271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.075399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.075424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.075561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.075693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.075718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.075821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.075919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.075943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 
00:42:46.445 [2024-05-15 09:08:41.076074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.076240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.076266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.076396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.076500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.076524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.076682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.076831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.076856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.076953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.077116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.445 [2024-05-15 09:08:41.077140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.445 qpair failed and we were unable to recover it. 00:42:46.445 [2024-05-15 09:08:41.077303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.077404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.077429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 00:42:46.446 [2024-05-15 09:08:41.077542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.077646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.077671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 00:42:46.446 [2024-05-15 09:08:41.077806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.077962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.077987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 
00:42:46.446 [2024-05-15 09:08:41.078114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.078223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.078248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 00:42:46.446 [2024-05-15 09:08:41.078432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.078612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.078637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 00:42:46.446 [2024-05-15 09:08:41.078744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.078893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.078920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 00:42:46.446 [2024-05-15 09:08:41.079099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.079205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.079237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 00:42:46.446 [2024-05-15 09:08:41.079343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.079507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.079532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 00:42:46.446 [2024-05-15 09:08:41.079681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.079821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.079849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 00:42:46.446 [2024-05-15 09:08:41.080037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.080204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.080239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 
00:42:46.446 [2024-05-15 09:08:41.080387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.080524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.080550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 00:42:46.446 [2024-05-15 09:08:41.080690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.080847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.080875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 00:42:46.446 [2024-05-15 09:08:41.081023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.081125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.081150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 00:42:46.446 [2024-05-15 09:08:41.081323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.081465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.081489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 00:42:46.446 [2024-05-15 09:08:41.081668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.081782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.081809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 00:42:46.446 [2024-05-15 09:08:41.081986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.082119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.082143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 00:42:46.446 [2024-05-15 09:08:41.082276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.082379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.082406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 
00:42:46.446 [2024-05-15 09:08:41.082534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.082692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.082720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 00:42:46.446 [2024-05-15 09:08:41.082877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.082985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.083010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 00:42:46.446 [2024-05-15 09:08:41.083143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.083273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.083298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 00:42:46.446 [2024-05-15 09:08:41.083401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.083529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.083554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 00:42:46.446 [2024-05-15 09:08:41.083689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.083817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.083842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 00:42:46.446 [2024-05-15 09:08:41.084004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.084183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.084208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 00:42:46.446 [2024-05-15 09:08:41.084340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.084500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.084526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 
00:42:46.446 [2024-05-15 09:08:41.084679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.084851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.446 [2024-05-15 09:08:41.084879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.446 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.085050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.085179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.085206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.085388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.085517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.085542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.085671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.085767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.085792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.085927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.086083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.086111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.086267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.086375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.086400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.086568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.086671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.086719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 
00:42:46.447 [2024-05-15 09:08:41.086840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.087013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.087038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.087139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.087245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.087270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.087381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.087535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.087560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.087688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.087787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.087812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.087944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.088081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.088105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.088226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.088335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.088361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.088489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.088597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.088623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 
00:42:46.447 [2024-05-15 09:08:41.088728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.088828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.088853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.088992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.089094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.089119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.089223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.089341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.089366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.089506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.089604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.089629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.089761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.089868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.089893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.090021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.090178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.090203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.090335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.090464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.090490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 
00:42:46.447 [2024-05-15 09:08:41.090622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.090752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.090777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.090938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.091045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.091070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.091195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.091379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.091405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.091516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.091649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.091674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.091802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.091933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.091958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.092082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.092214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.092248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.092414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.092548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.092573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 
00:42:46.447 [2024-05-15 09:08:41.092725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.092854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.092879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.447 [2024-05-15 09:08:41.092981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.093078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.447 [2024-05-15 09:08:41.093103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.447 qpair failed and we were unable to recover it. 00:42:46.448 [2024-05-15 09:08:41.093234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.093344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.093369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.448 qpair failed and we were unable to recover it. 00:42:46.448 [2024-05-15 09:08:41.093502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.093634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.093660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.448 qpair failed and we were unable to recover it. 00:42:46.448 [2024-05-15 09:08:41.093761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.093867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.093893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.448 qpair failed and we were unable to recover it. 00:42:46.448 [2024-05-15 09:08:41.093994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.094145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.094170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.448 qpair failed and we were unable to recover it. 00:42:46.448 [2024-05-15 09:08:41.094296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.094401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.094427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.448 qpair failed and we were unable to recover it. 
00:42:46.448 [2024-05-15 09:08:41.094582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.094709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.094734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.448 qpair failed and we were unable to recover it. 00:42:46.448 [2024-05-15 09:08:41.094858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.094975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.095003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.448 qpair failed and we were unable to recover it. 00:42:46.448 [2024-05-15 09:08:41.095167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.095329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.095355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.448 qpair failed and we were unable to recover it. 00:42:46.448 [2024-05-15 09:08:41.095449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.095605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.095630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.448 qpair failed and we were unable to recover it. 00:42:46.448 [2024-05-15 09:08:41.095758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.095861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.095888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.448 qpair failed and we were unable to recover it. 00:42:46.448 [2024-05-15 09:08:41.096014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.096137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.096162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.448 qpair failed and we were unable to recover it. 00:42:46.448 [2024-05-15 09:08:41.096256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.096383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.096408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.448 qpair failed and we were unable to recover it. 
00:42:46.448 [2024-05-15 09:08:41.096519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.096644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.096669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.448 qpair failed and we were unable to recover it. 00:42:46.448 [2024-05-15 09:08:41.096802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.096928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.096953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.448 qpair failed and we were unable to recover it. 00:42:46.448 [2024-05-15 09:08:41.097085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.097181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.097206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.448 qpair failed and we were unable to recover it. 00:42:46.448 [2024-05-15 09:08:41.097352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.097448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.097473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.448 qpair failed and we were unable to recover it. 00:42:46.448 [2024-05-15 09:08:41.097624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.097785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.097809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.448 qpair failed and we were unable to recover it. 00:42:46.448 [2024-05-15 09:08:41.097943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.098092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.098119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.448 qpair failed and we were unable to recover it. 00:42:46.448 [2024-05-15 09:08:41.098278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.098409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.448 [2024-05-15 09:08:41.098434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.448 qpair failed and we were unable to recover it. 
00:42:46.448 [2024-05-15 09:08:41.098533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.448 [2024-05-15 09:08:41.098660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.448 [2024-05-15 09:08:41.098685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.448 qpair failed and we were unable to recover it.
00:42:46.448 [2024-05-15 09:08:41.098836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.448 [2024-05-15 09:08:41.098995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.448 [2024-05-15 09:08:41.099020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.448 qpair failed and we were unable to recover it.
[This four-line failure sequence (two posix.c:1037:posix_sock_create "connect() failed, errno = 111" entries, one nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420" entry, then "qpair failed and we were unable to recover it.") repeats verbatim, with only the microsecond timestamps advancing, from 09:08:41.098 through 09:08:41.141 (log offsets 00:42:46.448 to 00:42:46.454). The duplicated entries are collapsed here.]
00:42:46.454 [2024-05-15 09:08:41.142087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.142221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.142246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.142408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.142534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.142559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.142682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.142834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.142859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.142989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.143119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.143143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.143246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.143401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.143426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.143560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.143692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.143717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.143867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.143976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.144003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 
00:42:46.454 [2024-05-15 09:08:41.144133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.144268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.144294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.144409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.144579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.144606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.144760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.144883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.144907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.145114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.145257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.145286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.145459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.145617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.145642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.145802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.145932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.145957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.146070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.146207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.146265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 
00:42:46.454 [2024-05-15 09:08:41.146403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.146590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.146617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.146794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.146912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.146937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.147092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.147190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.147221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.147336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.147482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.147514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.147650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.147766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.147791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.147915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.148045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.148070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.148204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.148331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.148356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 
00:42:46.454 [2024-05-15 09:08:41.148481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.148580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.148605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.148707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.148832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.148857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.148983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.149140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.149165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.149365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.149496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.454 [2024-05-15 09:08:41.149538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.454 qpair failed and we were unable to recover it. 00:42:46.454 [2024-05-15 09:08:41.149691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.149823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.149850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 00:42:46.455 [2024-05-15 09:08:41.149979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.150154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.150179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 00:42:46.455 [2024-05-15 09:08:41.150341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.150485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.150513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 
00:42:46.455 [2024-05-15 09:08:41.150684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.150812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.150854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 00:42:46.455 [2024-05-15 09:08:41.150971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.151088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.151115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 00:42:46.455 [2024-05-15 09:08:41.151285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.151402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.151429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 00:42:46.455 [2024-05-15 09:08:41.151583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.151714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.151754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 00:42:46.455 [2024-05-15 09:08:41.151905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.152074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.152101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 00:42:46.455 [2024-05-15 09:08:41.152248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.152354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.152382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 00:42:46.455 [2024-05-15 09:08:41.152511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.152674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.152699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 
00:42:46.455 [2024-05-15 09:08:41.152864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.153007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.153035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 00:42:46.455 [2024-05-15 09:08:41.153175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.153317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.153345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 00:42:46.455 [2024-05-15 09:08:41.153498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.153638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.153663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 00:42:46.455 [2024-05-15 09:08:41.153839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.153989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.154014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 00:42:46.455 [2024-05-15 09:08:41.154189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.154350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.154376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 00:42:46.455 [2024-05-15 09:08:41.154532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.154663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.154689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 00:42:46.455 [2024-05-15 09:08:41.154822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.154991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.155019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 
00:42:46.455 [2024-05-15 09:08:41.155134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.155299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.155324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 00:42:46.455 [2024-05-15 09:08:41.155453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.155549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.155574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 00:42:46.455 [2024-05-15 09:08:41.155704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.155819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.155846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 00:42:46.455 [2024-05-15 09:08:41.155967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.156120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.156160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 00:42:46.455 [2024-05-15 09:08:41.156329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.156428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.156454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 00:42:46.455 [2024-05-15 09:08:41.156583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.156717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.156744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 00:42:46.455 [2024-05-15 09:08:41.156856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.157011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.157036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.455 qpair failed and we were unable to recover it. 
00:42:46.455 [2024-05-15 09:08:41.157166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.157275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.455 [2024-05-15 09:08:41.157300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.157457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.157593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.157620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.157730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.157885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.157910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.158034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.158162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.158186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.158325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.158425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.158449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.158598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.158749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.158774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.158905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.159028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.159053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 
00:42:46.456 [2024-05-15 09:08:41.159222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.159361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.159389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.159545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.159657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.159682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.159785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.159915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.159940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.160091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.160242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.160271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.160414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.160579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.160603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.160731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.160829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.160854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.161003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.161149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.161176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 
00:42:46.456 [2024-05-15 09:08:41.161304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.161433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.161457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.161559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.161684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.161708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.161861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.162031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.162058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.162200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.162373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.162401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.162560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.162687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.162711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.162835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.162977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.163009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.163144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.163255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.163283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 
00:42:46.456 [2024-05-15 09:08:41.163414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.163546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.163571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.163702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.163860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.163885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.164038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.164153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.164181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.164346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.164559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.164586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.164731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.164897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.164924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.165099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.165238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.165263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.165420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.165520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.165546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 
00:42:46.456 [2024-05-15 09:08:41.165708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.165862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.165889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.456 qpair failed and we were unable to recover it. 00:42:46.456 [2024-05-15 09:08:41.166023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.456 [2024-05-15 09:08:41.166162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.166189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.457 qpair failed and we were unable to recover it. 00:42:46.457 [2024-05-15 09:08:41.166360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.166490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.166515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.457 qpair failed and we were unable to recover it. 00:42:46.457 [2024-05-15 09:08:41.166740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.166880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.166908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.457 qpair failed and we were unable to recover it. 00:42:46.457 [2024-05-15 09:08:41.167054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.167207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.167238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.457 qpair failed and we were unable to recover it. 00:42:46.457 [2024-05-15 09:08:41.167360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.167518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.167563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.457 qpair failed and we were unable to recover it. 00:42:46.457 [2024-05-15 09:08:41.167706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.167874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.167901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.457 qpair failed and we were unable to recover it. 
00:42:46.457 [2024-05-15 09:08:41.168017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.168184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.168211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.457 qpair failed and we were unable to recover it. 00:42:46.457 [2024-05-15 09:08:41.168370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.168468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.168493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.457 qpair failed and we were unable to recover it. 00:42:46.457 [2024-05-15 09:08:41.168649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.168785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.168812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.457 qpair failed and we were unable to recover it. 00:42:46.457 [2024-05-15 09:08:41.169027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.169200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.169236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.457 qpair failed and we were unable to recover it. 00:42:46.457 [2024-05-15 09:08:41.169359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.169491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.169516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.457 qpair failed and we were unable to recover it. 00:42:46.457 [2024-05-15 09:08:41.169673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.169792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.169819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.457 qpair failed and we were unable to recover it. 00:42:46.457 [2024-05-15 09:08:41.169933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.170051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.170078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.457 qpair failed and we were unable to recover it. 
00:42:46.457 [2024-05-15 09:08:41.170206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.170347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.170372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.457 qpair failed and we were unable to recover it. 00:42:46.457 [2024-05-15 09:08:41.170473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.170624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.170653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.457 qpair failed and we were unable to recover it. 00:42:46.457 [2024-05-15 09:08:41.170790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.170925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.170953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.457 qpair failed and we were unable to recover it. 00:42:46.457 [2024-05-15 09:08:41.171105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.171201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.171233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.457 qpair failed and we were unable to recover it. 00:42:46.457 [2024-05-15 09:08:41.171425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.171554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.171579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.457 qpair failed and we were unable to recover it. 00:42:46.457 [2024-05-15 09:08:41.171709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.171801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.171826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.457 qpair failed and we were unable to recover it. 00:42:46.457 [2024-05-15 09:08:41.171956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.172112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.457 [2024-05-15 09:08:41.172153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.457 qpair failed and we were unable to recover it. 
00:42:46.457 [2024-05-15 09:08:41.172298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.457 [2024-05-15 09:08:41.172465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.457 [2024-05-15 09:08:41.172493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.457 qpair failed and we were unable to recover it.
[log collapsed for readability: the three-record pattern above — two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x16a1570 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats roughly 150 more times between 09:08:41.172 and 09:08:41.219 (console timestamps 00:42:46.457 through 00:42:46.744), differing only in the microsecond timestamps.]
00:42:46.744 [2024-05-15 09:08:41.219840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.219936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.219961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.744 qpair failed and we were unable to recover it. 00:42:46.744 [2024-05-15 09:08:41.220075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.220228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.220257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.744 qpair failed and we were unable to recover it. 00:42:46.744 [2024-05-15 09:08:41.220382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.220536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.220563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.744 qpair failed and we were unable to recover it. 00:42:46.744 [2024-05-15 09:08:41.220700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.220882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.220907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.744 qpair failed and we were unable to recover it. 00:42:46.744 [2024-05-15 09:08:41.221066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.221189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.221239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.744 qpair failed and we were unable to recover it. 00:42:46.744 [2024-05-15 09:08:41.221386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.221601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.221628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.744 qpair failed and we were unable to recover it. 00:42:46.744 [2024-05-15 09:08:41.221767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.221897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.221924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.744 qpair failed and we were unable to recover it. 
00:42:46.744 [2024-05-15 09:08:41.222055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.222182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.222206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.744 qpair failed and we were unable to recover it. 00:42:46.744 [2024-05-15 09:08:41.222342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.222491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.222519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.744 qpair failed and we were unable to recover it. 00:42:46.744 [2024-05-15 09:08:41.222684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.222822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.222849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.744 qpair failed and we were unable to recover it. 00:42:46.744 [2024-05-15 09:08:41.222976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.223083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.744 [2024-05-15 09:08:41.223109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.745 [2024-05-15 09:08:41.223241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.223363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.223391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.745 [2024-05-15 09:08:41.223534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.223671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.223699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.745 [2024-05-15 09:08:41.223846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.223975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.224000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 
00:42:46.745 [2024-05-15 09:08:41.224126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.224252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.224277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.745 [2024-05-15 09:08:41.224432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.224560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.224588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.745 [2024-05-15 09:08:41.224737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.224893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.224918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.745 [2024-05-15 09:08:41.225072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.225233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.225261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.745 [2024-05-15 09:08:41.225432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.225563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.225588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.745 [2024-05-15 09:08:41.225719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.225852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.225877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.745 [2024-05-15 09:08:41.226056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.226201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.226236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 
00:42:46.745 [2024-05-15 09:08:41.226349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.226481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.226509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.745 [2024-05-15 09:08:41.226687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.226858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.226886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.745 [2024-05-15 09:08:41.227018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.227187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.227222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.745 [2024-05-15 09:08:41.227366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.227512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.227539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.745 [2024-05-15 09:08:41.227682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.227840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.227865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.745 [2024-05-15 09:08:41.228019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.228186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.228213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.745 [2024-05-15 09:08:41.228377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.228491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.228520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 
00:42:46.745 [2024-05-15 09:08:41.228675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.228804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.228829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.745 [2024-05-15 09:08:41.228941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.229063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.229092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.745 [2024-05-15 09:08:41.229249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.229403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.229434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.745 [2024-05-15 09:08:41.229568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.229674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.229700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.745 [2024-05-15 09:08:41.229803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.229946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.229973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.745 [2024-05-15 09:08:41.230108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.230253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.230281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.745 [2024-05-15 09:08:41.230441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.230546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.230570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 
00:42:46.745 [2024-05-15 09:08:41.230723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.230841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.745 [2024-05-15 09:08:41.230869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.745 qpair failed and we were unable to recover it. 00:42:46.746 [2024-05-15 09:08:41.230973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.231110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.231138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.746 qpair failed and we were unable to recover it. 00:42:46.746 [2024-05-15 09:08:41.231306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.231441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.231466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.746 qpair failed and we were unable to recover it. 00:42:46.746 [2024-05-15 09:08:41.231648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.231814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.231842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.746 qpair failed and we were unable to recover it. 00:42:46.746 [2024-05-15 09:08:41.231961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.232082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.232109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.746 qpair failed and we were unable to recover it. 00:42:46.746 [2024-05-15 09:08:41.232235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.232366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.232395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.746 qpair failed and we were unable to recover it. 00:42:46.746 [2024-05-15 09:08:41.232498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.232689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.232715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.746 qpair failed and we were unable to recover it. 
00:42:46.746 [2024-05-15 09:08:41.232841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.232990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.233018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.746 qpair failed and we were unable to recover it. 00:42:46.746 [2024-05-15 09:08:41.233148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.233272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.233298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.746 qpair failed and we were unable to recover it. 00:42:46.746 [2024-05-15 09:08:41.233417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.233522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.233549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.746 qpair failed and we were unable to recover it. 00:42:46.746 [2024-05-15 09:08:41.233686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.233851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.233878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.746 qpair failed and we were unable to recover it. 00:42:46.746 [2024-05-15 09:08:41.234031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.234159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.234184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.746 qpair failed and we were unable to recover it. 00:42:46.746 [2024-05-15 09:08:41.234348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.234486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.234514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.746 qpair failed and we were unable to recover it. 00:42:46.746 [2024-05-15 09:08:41.234730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.234910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.234935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.746 qpair failed and we were unable to recover it. 
00:42:46.746 [2024-05-15 09:08:41.235032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.235126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.235152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.746 qpair failed and we were unable to recover it. 00:42:46.746 [2024-05-15 09:08:41.235303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.235484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.235511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.746 qpair failed and we were unable to recover it. 00:42:46.746 [2024-05-15 09:08:41.235652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.235820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.235848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.746 qpair failed and we were unable to recover it. 00:42:46.746 [2024-05-15 09:08:41.236018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.236147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.236189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.746 qpair failed and we were unable to recover it. 00:42:46.746 [2024-05-15 09:08:41.236361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.236496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.236524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.746 qpair failed and we were unable to recover it. 00:42:46.746 [2024-05-15 09:08:41.236658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.236805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.236830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.746 qpair failed and we were unable to recover it. 00:42:46.746 [2024-05-15 09:08:41.236936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.237054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.237079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.746 qpair failed and we were unable to recover it. 
00:42:46.746 [2024-05-15 09:08:41.237240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.237415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.746 [2024-05-15 09:08:41.237443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.237619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.237739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.237764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.237860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.237959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.237983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.238108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.238208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.238243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.238391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.238513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.238542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.238710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.238834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.238859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.239042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.239151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.239180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 
00:42:46.747 [2024-05-15 09:08:41.239377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.239481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.239506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.239662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.239792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.239816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.239950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.240075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.240100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.240199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.240306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.240333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.240483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.240608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.240649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.240828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.240950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.240974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.241077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.241202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.241237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 
00:42:46.747 [2024-05-15 09:08:41.241384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.241483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.241508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.241654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.241807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.241835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.242000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.242107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.242134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.242284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.242416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.242441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.242594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.242734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.242763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.242905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.243067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.243094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.243319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.243467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.243494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 
00:42:46.747 [2024-05-15 09:08:41.243602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.243766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.243794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.243911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.244050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.244078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.244288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.244430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.244458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.244640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.247381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.247410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.247562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.247702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.247727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.247857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.247979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.248004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.248134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.248259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.248285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 
00:42:46.747 [2024-05-15 09:08:41.248412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.248545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.747 [2024-05-15 09:08:41.248570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.747 qpair failed and we were unable to recover it. 00:42:46.747 [2024-05-15 09:08:41.248675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.248889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.248914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.748 qpair failed and we were unable to recover it. 00:42:46.748 [2024-05-15 09:08:41.249086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.249264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.249289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.748 qpair failed and we were unable to recover it. 00:42:46.748 [2024-05-15 09:08:41.249424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.249557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.249583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.748 qpair failed and we were unable to recover it. 00:42:46.748 [2024-05-15 09:08:41.249685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.249812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.249837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.748 qpair failed and we were unable to recover it. 00:42:46.748 [2024-05-15 09:08:41.249947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.250070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.250097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.748 qpair failed and we were unable to recover it. 00:42:46.748 [2024-05-15 09:08:41.250280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.250389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.250414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.748 qpair failed and we were unable to recover it. 
00:42:46.748 [2024-05-15 09:08:41.250543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.250675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.250704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.748 qpair failed and we were unable to recover it. 00:42:46.748 [2024-05-15 09:08:41.250875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.251026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.251053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.748 qpair failed and we were unable to recover it. 00:42:46.748 [2024-05-15 09:08:41.251214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.251359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.251387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.748 qpair failed and we were unable to recover it. 00:42:46.748 [2024-05-15 09:08:41.251530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.251625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.251649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.748 qpair failed and we were unable to recover it. 00:42:46.748 [2024-05-15 09:08:41.251795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.252009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.252036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.748 qpair failed and we were unable to recover it. 00:42:46.748 [2024-05-15 09:08:41.252225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.252356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.252381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.748 qpair failed and we were unable to recover it. 00:42:46.748 [2024-05-15 09:08:41.252487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.252659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.748 [2024-05-15 09:08:41.252684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.748 qpair failed and we were unable to recover it. 
00:42:46.748 [2024-05-15 09:08:41.252852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.748 [2024-05-15 09:08:41.252982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.748 [2024-05-15 09:08:41.253007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.748 qpair failed and we were unable to recover it.
[The same four-line failure record repeats 154 times across this span (timestamps 09:08:41.252852 through 09:08:41.296909, elapsed-time prefixes 00:42:46.748 through 00:42:46.753), varying only in timestamp: two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x16a1570 at addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it."]
00:42:46.753 [2024-05-15 09:08:41.297002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.753 [2024-05-15 09:08:41.297102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.753 [2024-05-15 09:08:41.297126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.753 qpair failed and we were unable to recover it. 00:42:46.753 [2024-05-15 09:08:41.297265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.297388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.297413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.297570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.297708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.297739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.297891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.297998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.298022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.298152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.298262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.298287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.298468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.298604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.298632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.298736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.298904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.298932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 
00:42:46.754 [2024-05-15 09:08:41.299081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.299192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.299231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.299367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.299518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.299547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.299679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.299844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.299871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.299993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.300122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.300146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.300278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.300400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.300429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.300538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.300674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.300715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.300820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.300945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.300969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 
00:42:46.754 [2024-05-15 09:08:41.301125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.301246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.301273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.301436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.301577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.301605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.301755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.301857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.301882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.302032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.302149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.302175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.302300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.302410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.302434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.302565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.302669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.302694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.302839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.302970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.302994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 
00:42:46.754 [2024-05-15 09:08:41.303122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.303259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.303283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.303389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.303492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.303516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.303629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.303773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.303800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.303912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.304027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.304054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.304180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.304309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.304335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.304515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.304632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.304660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.304783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.304928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.304956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 
00:42:46.754 [2024-05-15 09:08:41.305105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.305206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.305237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.754 [2024-05-15 09:08:41.305371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.305540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.754 [2024-05-15 09:08:41.305567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.754 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.305707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.305819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.305847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.306000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.306126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.306151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.306259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.306362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.306386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.306514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.306619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.306644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.306747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.306851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.306876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 
00:42:46.755 [2024-05-15 09:08:41.307029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.307162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.307190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.307327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.307431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.307456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.307559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.307665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.307690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.307884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.308011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.308035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.308150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.308266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.308293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.308446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.308556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.308580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.308747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.308883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.308908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 
00:42:46.755 [2024-05-15 09:08:41.309055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.309169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.309196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.309369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.309479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.309504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.309639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.309786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.309814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.309924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.310042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.310070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.310194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.310347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.310372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.310507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.310683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.310711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.310853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.310991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.311019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 
00:42:46.755 [2024-05-15 09:08:41.311138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.311262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.311288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.311399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.311552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.311577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.311698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.311842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.311869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.312036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.312133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.312157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.312307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.312452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.312485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.755 qpair failed and we were unable to recover it. 00:42:46.755 [2024-05-15 09:08:41.312618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.755 [2024-05-15 09:08:41.312756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.312784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.312928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.313048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.313072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 
00:42:46.756 [2024-05-15 09:08:41.313253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.313404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.313432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.313547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.313661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.313689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.313816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.313940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.313964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.314072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.314199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.314230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.314362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.314518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.314544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.314647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.314774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.314800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.314928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.315067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.315095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 
00:42:46.756 [2024-05-15 09:08:41.315247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.315381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.315409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.315582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.315705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.315746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.315889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.316041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.316068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.316181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.316349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.316375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.316504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.316632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.316657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.316812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.316931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.316960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.317107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.317254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.317279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 
00:42:46.756 [2024-05-15 09:08:41.317406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.317507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.317531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.317683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.317835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.317860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.317964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.318059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.318084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.318236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.318382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.318407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.318537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.318688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.318713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.318846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.318950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.318974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.319102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.319230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.319255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 
00:42:46.756 [2024-05-15 09:08:41.319396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.319564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.319588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.319688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.319865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.319892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.320050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.320145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.320170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.320330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.320474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.320500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.320635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.320762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.320786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.320883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.321009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.321033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.756 qpair failed and we were unable to recover it. 00:42:46.756 [2024-05-15 09:08:41.321155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.756 [2024-05-15 09:08:41.321274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.321302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.757 qpair failed and we were unable to recover it. 
00:42:46.757 [2024-05-15 09:08:41.321417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.321557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.321584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.757 qpair failed and we were unable to recover it. 00:42:46.757 [2024-05-15 09:08:41.321723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.321849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.321873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.757 qpair failed and we were unable to recover it. 00:42:46.757 [2024-05-15 09:08:41.322003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.322111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.322135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.757 qpair failed and we were unable to recover it. 00:42:46.757 [2024-05-15 09:08:41.322271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.322403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.322429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.757 qpair failed and we were unable to recover it. 00:42:46.757 [2024-05-15 09:08:41.322546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.322676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.322701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.757 qpair failed and we were unable to recover it. 00:42:46.757 [2024-05-15 09:08:41.322835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.322983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.323011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.757 qpair failed and we were unable to recover it. 00:42:46.757 [2024-05-15 09:08:41.323163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.323313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.323340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.757 qpair failed and we were unable to recover it. 
00:42:46.757 [2024-05-15 09:08:41.323463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.323565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.323589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.757 qpair failed and we were unable to recover it. 00:42:46.757 [2024-05-15 09:08:41.323744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.323868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.323895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.757 qpair failed and we were unable to recover it. 00:42:46.757 [2024-05-15 09:08:41.324001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.324116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.324142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.757 qpair failed and we were unable to recover it. 00:42:46.757 [2024-05-15 09:08:41.324274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.324406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.324431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.757 qpair failed and we were unable to recover it. 00:42:46.757 [2024-05-15 09:08:41.324586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.324738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.324763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.757 qpair failed and we were unable to recover it. 00:42:46.757 [2024-05-15 09:08:41.324889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.325007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.325035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.757 qpair failed and we were unable to recover it. 00:42:46.757 [2024-05-15 09:08:41.325223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.325352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.757 [2024-05-15 09:08:41.325376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.757 qpair failed and we were unable to recover it. 
00:42:46.757 [2024-05-15 09:08:41.325508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.757 [2024-05-15 09:08:41.325621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.757 [2024-05-15 09:08:41.325647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.757 qpair failed and we were unable to recover it.
[identical records -- two "connect() failed, errno = 111" lines, one "sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420" line, and one "qpair failed and we were unable to recover it." line per reconnect attempt, differing only in timestamp -- repeat from 09:08:41.325758 through 09:08:41.370041 and are elided here]
00:42:46.763 [2024-05-15 09:08:41.370177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.370323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.370351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.370477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.370576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.370601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.370757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.370900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.370927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.371034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.371178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.371204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.371392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.371508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.371533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.371675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.371798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.371824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.371978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.372090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.372116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 
00:42:46.763 [2024-05-15 09:08:41.372256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.372384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.372409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.372602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.372729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.372753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.372879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.373030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.373056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.373195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.373360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.373385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.373485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.373604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.373646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.373800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.373914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.373940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.374050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.374175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.374198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 
00:42:46.763 [2024-05-15 09:08:41.374419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.374527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.374553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.374683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.374841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.374873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.375017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.375171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.375195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.375311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.375432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.375458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.375574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.375724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.375752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.375900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.376031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.376055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.376229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.376356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.376381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 
00:42:46.763 [2024-05-15 09:08:41.376504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.376622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.376649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.376767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.376862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.376886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.377047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.377206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.377238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.377371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.377509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.377535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.377656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.377762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.377786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.377926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.378050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.378073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 00:42:46.763 [2024-05-15 09:08:41.378177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.378300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.378328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.763 qpair failed and we were unable to recover it. 
00:42:46.763 [2024-05-15 09:08:41.378455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.763 [2024-05-15 09:08:41.378567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.378591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 00:42:46.764 [2024-05-15 09:08:41.378687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.378810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.378834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 00:42:46.764 [2024-05-15 09:08:41.378944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.379040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.379064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 00:42:46.764 [2024-05-15 09:08:41.379187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.379319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.379343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 00:42:46.764 [2024-05-15 09:08:41.379477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.379631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.379658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 00:42:46.764 [2024-05-15 09:08:41.379770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.379892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.379916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 00:42:46.764 [2024-05-15 09:08:41.380019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.380119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.380143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 
00:42:46.764 [2024-05-15 09:08:41.380276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.380412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.380435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 00:42:46.764 [2024-05-15 09:08:41.380542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.380648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.380671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 00:42:46.764 [2024-05-15 09:08:41.380825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.380953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.380994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 00:42:46.764 [2024-05-15 09:08:41.381160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.381283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.381311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 00:42:46.764 [2024-05-15 09:08:41.381454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.381604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.381632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 00:42:46.764 [2024-05-15 09:08:41.381767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.381896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.381920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 00:42:46.764 [2024-05-15 09:08:41.382064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.382163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.382188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 
00:42:46.764 [2024-05-15 09:08:41.382303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.382454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.382479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 00:42:46.764 [2024-05-15 09:08:41.382612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.382746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.382772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 00:42:46.764 [2024-05-15 09:08:41.382934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.383087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.383113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 00:42:46.764 [2024-05-15 09:08:41.383266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.383360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.383384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 00:42:46.764 [2024-05-15 09:08:41.383497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.383624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.383648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 00:42:46.764 [2024-05-15 09:08:41.383791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.383908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.383935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 00:42:46.764 [2024-05-15 09:08:41.384076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.384233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.384260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 
00:42:46.764 [2024-05-15 09:08:41.384365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.384471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.384496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.764 qpair failed and we were unable to recover it. 00:42:46.764 [2024-05-15 09:08:41.384610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.384720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.764 [2024-05-15 09:08:41.384745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.765 [2024-05-15 09:08:41.384852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.384982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.385007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.765 [2024-05-15 09:08:41.385156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.385339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.385364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.765 [2024-05-15 09:08:41.385513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.385627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.385654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.765 [2024-05-15 09:08:41.385816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.385921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.385945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.765 [2024-05-15 09:08:41.386044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.386172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.386196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 
00:42:46.765 [2024-05-15 09:08:41.386316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.386464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.386492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.765 [2024-05-15 09:08:41.386632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.386816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.386841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.765 [2024-05-15 09:08:41.386938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.387041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.387066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.765 [2024-05-15 09:08:41.387165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.387293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.387336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.765 [2024-05-15 09:08:41.387502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.387646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.387687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.765 [2024-05-15 09:08:41.387814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.387917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.387942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.765 [2024-05-15 09:08:41.388130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.388254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.388279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 
00:42:46.765 [2024-05-15 09:08:41.388425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.388590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.388617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.765 [2024-05-15 09:08:41.388765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.388895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.388919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.765 [2024-05-15 09:08:41.389068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.389245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.389270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.765 [2024-05-15 09:08:41.389371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.389513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.389542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.765 [2024-05-15 09:08:41.389653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.389775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.389800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.765 [2024-05-15 09:08:41.389973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.390137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.390163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.765 [2024-05-15 09:08:41.390309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.390466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.390491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 
00:42:46.765 [2024-05-15 09:08:41.390643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.390773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.390797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.765 [2024-05-15 09:08:41.390976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.391110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.391137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.765 [2024-05-15 09:08:41.391277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.391393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.391420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.765 [2024-05-15 09:08:41.391555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.391682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.391707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.765 [2024-05-15 09:08:41.391837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.392009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.765 [2024-05-15 09:08:41.392034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.765 qpair failed and we were unable to recover it. 00:42:46.766 [2024-05-15 09:08:41.392157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.392263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.392288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 00:42:46.766 [2024-05-15 09:08:41.392384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.392512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.392536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 
00:42:46.766 [2024-05-15 09:08:41.392676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.392805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.392829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 00:42:46.766 [2024-05-15 09:08:41.392985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.393132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.393159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 00:42:46.766 [2024-05-15 09:08:41.393318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.393421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.393447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 00:42:46.766 [2024-05-15 09:08:41.393594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.393733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.393758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 00:42:46.766 [2024-05-15 09:08:41.393897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.394035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.394062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 00:42:46.766 [2024-05-15 09:08:41.394222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.394355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.394379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 00:42:46.766 [2024-05-15 09:08:41.394642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.394802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.394829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 
00:42:46.766 [2024-05-15 09:08:41.394976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.395101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.395126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 00:42:46.766 [2024-05-15 09:08:41.395226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.395380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.395405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 00:42:46.766 [2024-05-15 09:08:41.395559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.395758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.395795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 00:42:46.766 [2024-05-15 09:08:41.395968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.396106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.396133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 00:42:46.766 [2024-05-15 09:08:41.396264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.396399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.396424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 00:42:46.766 [2024-05-15 09:08:41.396541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.396669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.396693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 00:42:46.766 [2024-05-15 09:08:41.396860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.396967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.396995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 
00:42:46.766 [2024-05-15 09:08:41.397177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.397361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.397389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 00:42:46.766 [2024-05-15 09:08:41.397544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.397816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.397867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 00:42:46.766 [2024-05-15 09:08:41.397986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.398146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.398174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 00:42:46.766 [2024-05-15 09:08:41.398331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.398458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.398483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 00:42:46.766 [2024-05-15 09:08:41.398630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.398749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.398774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 00:42:46.766 [2024-05-15 09:08:41.398937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.399075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.399102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 00:42:46.766 [2024-05-15 09:08:41.399246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.399382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.766 [2024-05-15 09:08:41.399407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.766 qpair failed and we were unable to recover it. 
00:42:46.766 [2024-05-15 09:08:41.399590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.766 [2024-05-15 09:08:41.399811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.766 [2024-05-15 09:08:41.399839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.766 qpair failed and we were unable to recover it.
[... the same four-line sequence repeats without interruption for roughly 150 further connect attempts spanning 09:08:41.399 through 09:08:41.446, identical except for the timestamps ...]
00:42:46.773 [2024-05-15 09:08:41.445910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.773 [2024-05-15 09:08:41.446066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.773 [2024-05-15 09:08:41.446094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.773 qpair failed and we were unable to recover it.
00:42:46.773 [2024-05-15 09:08:41.446249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.773 [2024-05-15 09:08:41.446378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.773 [2024-05-15 09:08:41.446403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.773 qpair failed and we were unable to recover it. 00:42:46.773 [2024-05-15 09:08:41.446508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.773 [2024-05-15 09:08:41.446606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.773 [2024-05-15 09:08:41.446630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.773 qpair failed and we were unable to recover it. 00:42:46.773 [2024-05-15 09:08:41.446793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.773 [2024-05-15 09:08:41.446967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.773 [2024-05-15 09:08:41.446994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.773 qpair failed and we were unable to recover it. 00:42:46.773 [2024-05-15 09:08:41.447147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.773 [2024-05-15 09:08:41.447290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.773 [2024-05-15 09:08:41.447315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.773 qpair failed and we were unable to recover it. 00:42:46.773 [2024-05-15 09:08:41.447443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.773 [2024-05-15 09:08:41.447584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.773 [2024-05-15 09:08:41.447609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.773 qpair failed and we were unable to recover it. 00:42:46.773 [2024-05-15 09:08:41.447774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.773 [2024-05-15 09:08:41.447901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.773 [2024-05-15 09:08:41.447925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.773 qpair failed and we were unable to recover it. 00:42:46.773 [2024-05-15 09:08:41.448050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.773 [2024-05-15 09:08:41.448173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.773 [2024-05-15 09:08:41.448197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.773 qpair failed and we were unable to recover it. 
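errno = 111 is ECONNREFUSED on Linux: nothing at 10.0.0.2 is accepting connections on port 4420 (the NVMe/TCP default), which is the expected state while the test holds the target down. A minimal standalone reproduction of the failing call; only the address and port are taken from the log, everything else is illustrative:

```c
/* Minimal sketch of the call that keeps failing above: a plain POSIX TCP
 * connect() to the NVMe-oF target. With no listener on 10.0.0.2:4420 (or a
 * listener sending RST), connect() fails with errno 111 (ECONNREFUSED). */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* Against a down target this prints: connect() failed, errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```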
00:42:46.773 Read completed with error (sct=0, sc=8)
00:42:46.773 starting I/O failed
00:42:46.773 [... 32 queued I/Os in total (11 reads, 21 writes) complete with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:42:46.774 [2024-05-15 09:08:41.448578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:46.774 [2024-05-15 09:08:41.448722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.774 [2024-05-15 09:08:41.448902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.774 [2024-05-15 09:08:41.448934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420
00:42:46.774 qpair failed and we were unable to recover it.
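The transport error -6 is ENXIO, which the log itself expands as "No such device or address". The 32 aborted I/Os all carry NVMe status (sct=0, sc=8). Reading those fields against the NVMe base specification's Generic Command Status set (values recalled from the spec and worth re-checking; the decoder below is a sketch, not SPDK code), sct=0 selects the generic set and sc=0x08 is "Command Aborted due to SQ Deletion" -- consistent with the queue being torn down after the transport dropped:

```c
/* Hypothetical decoder for the (sct, sc) pairs printed above. */
#include <stdio.h>

static const char *decode_status(unsigned sct, unsigned sc)
{
    if (sct == 0) {                    /* Generic Command Status set */
        switch (sc) {
        case 0x00: return "Successful Completion";
        case 0x04: return "Data Transfer Error";
        case 0x08: return "Command Aborted due to SQ Deletion";
        default:   return "other generic status";
        }
    }
    return "non-generic status code type";
}

int main(void)
{
    /* Every failed Read/Write completion above reports sct=0, sc=8. */
    printf("sct=0 sc=8 -> %s\n", decode_status(0, 8));
    return 0;
}
```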
00:42:46.774 [2024-05-15 09:08:41.449057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.774 [2024-05-15 09:08:41.449210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.774 [2024-05-15 09:08:41.449261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420
00:42:46.774 qpair failed and we were unable to recover it.
00:42:46.774 [... the sequence repeats 3 more times for tqpair=0x7f9f30000b90 through 09:08:41.450271, then resumes on tqpair=0x16a1570 ...]
00:42:46.774 [2024-05-15 09:08:41.450394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.774 [2024-05-15 09:08:41.450525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:46.774 [2024-05-15 09:08:41.450549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:46.774 qpair failed and we were unable to recover it.
00:42:46.776 [... the same sequence for tqpair=0x16a1570 repeats 58 more times between 09:08:41.450675 and 09:08:41.468212, the Jenkins clock advancing from 00:42:46.774 to 00:42:46.776 ...]
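By this point the host is cycling through reconnect attempts for more than one qpair context (tqpair=0x16a1570 and tqpair=0x7f9f30000b90) against the same target address. A sketch of the host-side connect being retried, using SPDK's public API (function and field names recalled from the spdk/nvme.h headers; the subsystem NQN is a placeholder, while the transport address and service ID come from the log -- treat this as an illustration of the path, not the actual test driver):

```c
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts opts;
    spdk_env_opts_init(&opts);
    opts.name = "tcp_connect_sketch";
    if (spdk_env_init(&opts) < 0) {
        fprintf(stderr, "spdk_env_init failed\n");
        return 1;
    }

    struct spdk_nvme_transport_id trid = {0};
    spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
    trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
    snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");   /* from the log */
    snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");     /* from the log */
    snprintf(trid.subnqn, sizeof(trid.subnqn),
             "nqn.2016-06.io.spdk:cnode1");                   /* placeholder NQN */

    /* While the target is down, the TCP transport's internal
     * nvme_tcp_qpair_connect_sock() step fails with ECONNREFUSED and this
     * call returns NULL, matching the errors logged above. */
    struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "controller connect to 10.0.0.2:4420 failed\n");
        return 1;
    }

    spdk_nvme_detach(ctrlr);
    return 0;
}
```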
00:42:46.776 [2024-05-15 09:08:41.468383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.468537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.468563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.776 qpair failed and we were unable to recover it. 00:42:46.776 [2024-05-15 09:08:41.468692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.468847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.468872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.776 qpair failed and we were unable to recover it. 00:42:46.776 [2024-05-15 09:08:41.469000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.469153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.469194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:46.776 qpair failed and we were unable to recover it. 00:42:46.776 [2024-05-15 09:08:41.469356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.469499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.469527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.776 qpair failed and we were unable to recover it. 00:42:46.776 [2024-05-15 09:08:41.469674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.469841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.469869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.776 qpair failed and we were unable to recover it. 00:42:46.776 [2024-05-15 09:08:41.470066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.470225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.470252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.776 qpair failed and we were unable to recover it. 00:42:46.776 [2024-05-15 09:08:41.470354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.470461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.470486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.776 qpair failed and we were unable to recover it. 
00:42:46.776 [2024-05-15 09:08:41.470606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.470769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.470803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.776 qpair failed and we were unable to recover it. 00:42:46.776 [2024-05-15 09:08:41.470941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.471093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.471119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.776 qpair failed and we were unable to recover it. 00:42:46.776 [2024-05-15 09:08:41.471230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.471354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.471380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.776 qpair failed and we were unable to recover it. 00:42:46.776 [2024-05-15 09:08:41.471523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.471643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.776 [2024-05-15 09:08:41.471671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.777 qpair failed and we were unable to recover it. 00:42:46.777 [2024-05-15 09:08:41.471822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.471948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.471973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.777 qpair failed and we were unable to recover it. 00:42:46.777 [2024-05-15 09:08:41.472154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.472288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.472313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.777 qpair failed and we were unable to recover it. 00:42:46.777 [2024-05-15 09:08:41.472424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.472567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.472592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.777 qpair failed and we were unable to recover it. 
00:42:46.777 [2024-05-15 09:08:41.472720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.472875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.472917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.777 qpair failed and we were unable to recover it. 00:42:46.777 [2024-05-15 09:08:41.473023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.473189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.473221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.777 qpair failed and we were unable to recover it. 00:42:46.777 [2024-05-15 09:08:41.473376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.473472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.473496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.777 qpair failed and we were unable to recover it. 00:42:46.777 [2024-05-15 09:08:41.473659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.473787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.473828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.777 qpair failed and we were unable to recover it. 00:42:46.777 [2024-05-15 09:08:41.473983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.474138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.474162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.777 qpair failed and we were unable to recover it. 00:42:46.777 [2024-05-15 09:08:41.474365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.474465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.474490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.777 qpair failed and we were unable to recover it. 00:42:46.777 [2024-05-15 09:08:41.474597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.474756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.474781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.777 qpair failed and we were unable to recover it. 
00:42:46.777 [2024-05-15 09:08:41.474925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.475065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.475092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.777 qpair failed and we were unable to recover it. 00:42:46.777 [2024-05-15 09:08:41.475237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.475355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.475379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.777 qpair failed and we were unable to recover it. 00:42:46.777 [2024-05-15 09:08:41.475489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.475626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.475651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.777 qpair failed and we were unable to recover it. 00:42:46.777 [2024-05-15 09:08:41.475779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.475878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.475902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.777 qpair failed and we were unable to recover it. 00:42:46.777 [2024-05-15 09:08:41.476034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.476141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.476166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.777 qpair failed and we were unable to recover it. 00:42:46.777 [2024-05-15 09:08:41.476295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.476399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.476425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.777 qpair failed and we were unable to recover it. 00:42:46.777 [2024-05-15 09:08:41.476566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.476707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:46.777 [2024-05-15 09:08:41.476735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:46.777 qpair failed and we were unable to recover it. 
00:42:47.062 [2024-05-15 09:08:41.516197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.062 [2024-05-15 09:08:41.516332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.062 [2024-05-15 09:08:41.516359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.062 qpair failed and we were unable to recover it. 00:42:47.062 [2024-05-15 09:08:41.516476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.062 [2024-05-15 09:08:41.516605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.062 [2024-05-15 09:08:41.516630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.062 qpair failed and we were unable to recover it. 00:42:47.062 [2024-05-15 09:08:41.516764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.062 [2024-05-15 09:08:41.516891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.062 [2024-05-15 09:08:41.516916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.062 qpair failed and we were unable to recover it. 00:42:47.062 [2024-05-15 09:08:41.517047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.062 [2024-05-15 09:08:41.517191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.062 [2024-05-15 09:08:41.517236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.062 qpair failed and we were unable to recover it. 00:42:47.062 [2024-05-15 09:08:41.517398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.062 [2024-05-15 09:08:41.517530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.062 [2024-05-15 09:08:41.517563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.062 qpair failed and we were unable to recover it. 00:42:47.062 [2024-05-15 09:08:41.517687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.062 [2024-05-15 09:08:41.517801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.062 [2024-05-15 09:08:41.517827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.062 qpair failed and we were unable to recover it. 00:42:47.062 [2024-05-15 09:08:41.517976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.062 [2024-05-15 09:08:41.518135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.062 [2024-05-15 09:08:41.518165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.062 qpair failed and we were unable to recover it. 
00:42:47.063 [2024-05-15 09:08:41.518342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.518458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.518485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 00:42:47.063 [2024-05-15 09:08:41.518676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.518817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.518843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 00:42:47.063 [2024-05-15 09:08:41.518999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.519160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.519190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 00:42:47.063 [2024-05-15 09:08:41.519334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.519469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.519498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 00:42:47.063 [2024-05-15 09:08:41.519690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.519817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.519862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 00:42:47.063 [2024-05-15 09:08:41.519994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.520163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.520195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 00:42:47.063 [2024-05-15 09:08:41.520356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.520508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.520557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 
00:42:47.063 [2024-05-15 09:08:41.520716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.520825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.520852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 00:42:47.063 [2024-05-15 09:08:41.520994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.521154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.521184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 00:42:47.063 [2024-05-15 09:08:41.521312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.521426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.521454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 00:42:47.063 [2024-05-15 09:08:41.521593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.521734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.521779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 00:42:47.063 [2024-05-15 09:08:41.521927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.522051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.522083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 00:42:47.063 [2024-05-15 09:08:41.522206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.522375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.522404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 00:42:47.063 [2024-05-15 09:08:41.522547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.522654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.522687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 
00:42:47.063 [2024-05-15 09:08:41.522849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.523015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.523042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 00:42:47.063 [2024-05-15 09:08:41.523160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.523284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.523310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 00:42:47.063 [2024-05-15 09:08:41.523424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.523542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.523576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 00:42:47.063 [2024-05-15 09:08:41.523766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.523905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.523937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 00:42:47.063 [2024-05-15 09:08:41.524062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.524186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.524223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 00:42:47.063 [2024-05-15 09:08:41.524410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.524521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.524547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 00:42:47.063 [2024-05-15 09:08:41.524685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.524794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.524825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 
00:42:47.063 [2024-05-15 09:08:41.524945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.525086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.063 [2024-05-15 09:08:41.525115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.063 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.525245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.525358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.525384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.525516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.525700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.525732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.525847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.526000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.526033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.526221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.526360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.526385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.526548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.526737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.526768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.526924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.527053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.527079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 
00:42:47.064 [2024-05-15 09:08:41.527206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.527365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.527390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.527523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.527624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.527649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.527791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.527889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.527913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.528019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.528119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.528145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.528267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.528396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.528421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.528522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.528616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.528640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.528746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.528872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.528897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 
00:42:47.064 [2024-05-15 09:08:41.529011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.529122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.529150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.529272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.529383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.529408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.529527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.529658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.529683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.529844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.529951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.529975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.530086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.530248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.530292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.530420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.530527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.530552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.530677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.530820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.530849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 
00:42:47.064 [2024-05-15 09:08:41.530974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.531090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.531116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.531294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.531430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.531455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.531559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.531662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.531685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.531791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.531891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.531915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.532024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.532172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.532196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.532358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.532473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.532500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 00:42:47.064 [2024-05-15 09:08:41.532660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.532807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.532840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.064 qpair failed and we were unable to recover it. 
00:42:47.064 [2024-05-15 09:08:41.532976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.533083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.064 [2024-05-15 09:08:41.533115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 00:42:47.065 [2024-05-15 09:08:41.533300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.533409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.533437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 00:42:47.065 [2024-05-15 09:08:41.533557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.533756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.533783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 00:42:47.065 [2024-05-15 09:08:41.533900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.534017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.534046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 00:42:47.065 [2024-05-15 09:08:41.534200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.534392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.534421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 00:42:47.065 [2024-05-15 09:08:41.534542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.534678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.534711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 00:42:47.065 [2024-05-15 09:08:41.534854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.535010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.535040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 
00:42:47.065 [2024-05-15 09:08:41.535193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.535345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.535372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 00:42:47.065 [2024-05-15 09:08:41.535502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.535726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.535755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 00:42:47.065 [2024-05-15 09:08:41.535894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.536078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.536110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 00:42:47.065 [2024-05-15 09:08:41.536286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.536424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.536455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 00:42:47.065 [2024-05-15 09:08:41.536591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.536734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.536766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 00:42:47.065 [2024-05-15 09:08:41.536906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.537043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.537070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 00:42:47.065 [2024-05-15 09:08:41.537190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.537310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.537339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 
00:42:47.065 [2024-05-15 09:08:41.537457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.537569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.537613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 00:42:47.065 [2024-05-15 09:08:41.537771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.537890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.537916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 00:42:47.065 [2024-05-15 09:08:41.538046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.538191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.538224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 00:42:47.065 [2024-05-15 09:08:41.538365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.538495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.538523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 00:42:47.065 [2024-05-15 09:08:41.538638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.538761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.538787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 00:42:47.065 [2024-05-15 09:08:41.538954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.539125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.539156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 00:42:47.065 [2024-05-15 09:08:41.539318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.539455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.539480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 
00:42:47.065 [2024-05-15 09:08:41.539620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.539753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.539783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 00:42:47.065 [2024-05-15 09:08:41.539894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.540043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.540071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 00:42:47.065 [2024-05-15 09:08:41.540186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.540359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.065 [2024-05-15 09:08:41.540389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.065 qpair failed and we were unable to recover it. 00:42:47.065 [2024-05-15 09:08:41.540529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.540666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.540712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.540866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.540988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.541017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.541142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.541333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.541363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.541502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.541613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.541640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 
00:42:47.066 [2024-05-15 09:08:41.541797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.541939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.541971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.542142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.542326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.542353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.542461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.542594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.542620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.542777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.542899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.542930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.543042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.543199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.543247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.543383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.543542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.543570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.543703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.543856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.543886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 
00:42:47.066 [2024-05-15 09:08:41.544041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.544227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.544253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.544385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.544490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.544521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.544627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.544744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.544786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.544906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.545016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.545053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.545192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.545334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.545361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.545469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.545595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.545627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.545751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.545870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.545899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 
00:42:47.066 [2024-05-15 09:08:41.546064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.546205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.546242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.546363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.546550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.546583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.546733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.546848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.546881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.547036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.547164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.547192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.547343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.547458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.547488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.547594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.547748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.547778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.547928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.548039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.548065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 
00:42:47.066 [2024-05-15 09:08:41.548212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.548360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.548390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.548570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.548711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.548745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.066 qpair failed and we were unable to recover it. 00:42:47.066 [2024-05-15 09:08:41.548881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.066 [2024-05-15 09:08:41.549020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.549047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.067 qpair failed and we were unable to recover it. 00:42:47.067 [2024-05-15 09:08:41.549206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.549338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.549365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.067 qpair failed and we were unable to recover it. 00:42:47.067 [2024-05-15 09:08:41.549480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.549650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.549679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.067 qpair failed and we were unable to recover it. 00:42:47.067 [2024-05-15 09:08:41.549835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.549952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.549981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.067 qpair failed and we were unable to recover it. 00:42:47.067 [2024-05-15 09:08:41.550120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.550244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.550271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.067 qpair failed and we were unable to recover it. 
00:42:47.067 [2024-05-15 09:08:41.550443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.550625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.550659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.067 qpair failed and we were unable to recover it. 00:42:47.067 [2024-05-15 09:08:41.550821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.550935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.550963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.067 qpair failed and we were unable to recover it. 00:42:47.067 [2024-05-15 09:08:41.551108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.551252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.551296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.067 qpair failed and we were unable to recover it. 00:42:47.067 [2024-05-15 09:08:41.551411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.551522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.551547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.067 qpair failed and we were unable to recover it. 00:42:47.067 [2024-05-15 09:08:41.551656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.551800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.551830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.067 qpair failed and we were unable to recover it. 00:42:47.067 [2024-05-15 09:08:41.551979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.552120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.552146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.067 qpair failed and we were unable to recover it. 00:42:47.067 [2024-05-15 09:08:41.552328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.552446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.552476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.067 qpair failed and we were unable to recover it. 
00:42:47.067 [2024-05-15 09:08:41.552639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.552800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.552845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.067 qpair failed and we were unable to recover it. 00:42:47.067 [2024-05-15 09:08:41.552971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.553134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.553160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.067 qpair failed and we were unable to recover it. 00:42:47.067 [2024-05-15 09:08:41.553266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.553396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.553425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.067 qpair failed and we were unable to recover it. 00:42:47.067 [2024-05-15 09:08:41.553535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.553669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.553699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.067 qpair failed and we were unable to recover it. 00:42:47.067 [2024-05-15 09:08:41.553839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.553946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.067 [2024-05-15 09:08:41.553975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.067 qpair failed and we were unable to recover it. 00:42:47.067 [2024-05-15 09:08:41.554117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.554302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.554332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.554494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.554627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.554655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 
00:42:47.068 [2024-05-15 09:08:41.554762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.554897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.554926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.555025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.555158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.555184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.555299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.555431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.555457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.555586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.555735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.555768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.555919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.556035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.556063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.556290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.556401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.556430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.556571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.556722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.556764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 
00:42:47.068 [2024-05-15 09:08:41.556910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.557097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.557123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.557254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.557363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.557391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.557558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.557705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.557731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.557880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.558026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.558053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.558192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.558302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.558327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.558469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.558615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.558640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.558739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.558894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.558919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 
00:42:47.068 [2024-05-15 09:08:41.559055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.559178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.559203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.559394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.559552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.559577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.559709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.559890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.559917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.560067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.560196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.560246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.560435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.560590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.560615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.560744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.560885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.560918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.561066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.561194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.561227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 
00:42:47.068 [2024-05-15 09:08:41.561380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.561520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.561547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.561667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.561844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.561868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.561993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.562122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.562147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.562299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.562405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.562433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.068 [2024-05-15 09:08:41.562545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.562690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.068 [2024-05-15 09:08:41.562717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.068 qpair failed and we were unable to recover it. 00:42:47.069 [2024-05-15 09:08:41.562899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.563024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.563049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 00:42:47.069 [2024-05-15 09:08:41.563201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.563348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.563376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 
00:42:47.069 [2024-05-15 09:08:41.563494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.563633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.563660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 00:42:47.069 [2024-05-15 09:08:41.563808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.563935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.563960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 00:42:47.069 [2024-05-15 09:08:41.564064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.564188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.564221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 00:42:47.069 [2024-05-15 09:08:41.564370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.564474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.564501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 00:42:47.069 [2024-05-15 09:08:41.564657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.564752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.564776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 00:42:47.069 [2024-05-15 09:08:41.564926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.565033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.565060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 00:42:47.069 [2024-05-15 09:08:41.565214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.565344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.565369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 
00:42:47.069 [2024-05-15 09:08:41.565490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.565599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.565625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 00:42:47.069 [2024-05-15 09:08:41.565802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.565909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.565936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 00:42:47.069 [2024-05-15 09:08:41.566107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.566224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.566253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 00:42:47.069 [2024-05-15 09:08:41.566373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.566498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.566522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 00:42:47.069 [2024-05-15 09:08:41.566673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.566803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.566828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 00:42:47.069 [2024-05-15 09:08:41.566961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.567058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.567083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 00:42:47.069 [2024-05-15 09:08:41.567211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.567347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.567372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 
00:42:47.069 [2024-05-15 09:08:41.567524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.567643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.567672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 00:42:47.069 [2024-05-15 09:08:41.567792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.567882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.567907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 00:42:47.069 [2024-05-15 09:08:41.568059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.568189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.568231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 00:42:47.069 [2024-05-15 09:08:41.568386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.568538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.568563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 00:42:47.069 [2024-05-15 09:08:41.568709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.568846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.568873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 00:42:47.069 [2024-05-15 09:08:41.569001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.569135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.569159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 00:42:47.069 [2024-05-15 09:08:41.569348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.569476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.569501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 
00:42:47.069 [2024-05-15 09:08:41.569673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.569814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.569840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 00:42:47.069 [2024-05-15 09:08:41.569966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.570090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.069 [2024-05-15 09:08:41.570114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.069 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.570223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.570368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.570396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.570535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.570702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.570729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.570880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.571009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.571034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.571135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.571258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.571284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.571392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.571501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.571526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 
00:42:47.070 [2024-05-15 09:08:41.571622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.571776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.571800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.571955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.572071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.572099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.572204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.572357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.572382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.572488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.572617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.572642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.572815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.572993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.573018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.573151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.573300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.573329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.573470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.573592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.573616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 
00:42:47.070 [2024-05-15 09:08:41.573760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.573922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.573950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.574100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.574190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.574220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.574328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.574452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.574477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.574670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.574774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.574799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.574985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.575151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.575179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.575320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.575478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.575503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.575648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.575791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.575818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 
00:42:47.070 [2024-05-15 09:08:41.575930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.576077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.576109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.576262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.576369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.576394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.576554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.576681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.576706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.576838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.576959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.576984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.577142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.577271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.577315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.070 [2024-05-15 09:08:41.577428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.577550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.070 [2024-05-15 09:08:41.577577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.070 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.577717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.577877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.577904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 
00:42:47.071 [2024-05-15 09:08:41.578047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.578177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.578203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.578402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.578513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.578540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.578690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.578851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.578877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.579059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.579208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.579242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.579396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.579494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.579518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.579616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.579743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.579768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.579899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.580021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.580046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 
00:42:47.071 [2024-05-15 09:08:41.580177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.580314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.580342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.580497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.580632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.580658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.580758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.580888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.580913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.581045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.581188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.581222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.581343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.581534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.581558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.581650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.581798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.581823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.581925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.582048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.582072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 
00:42:47.071 [2024-05-15 09:08:41.582208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.582316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.582341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.582472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.582600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.582624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.582776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.582893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.582920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.583063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.583243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.583269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.583397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.583524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.583549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.583685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.583852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.583877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.584005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.584153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.584181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 
00:42:47.071 [2024-05-15 09:08:41.584339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.584475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.584499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.584606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.584737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.584761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.584866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.584999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.585023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.071 qpair failed and we were unable to recover it. 00:42:47.071 [2024-05-15 09:08:41.585178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.071 [2024-05-15 09:08:41.585357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.072 [2024-05-15 09:08:41.585383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.072 qpair failed and we were unable to recover it. 00:42:47.072 [2024-05-15 09:08:41.585511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.072 [2024-05-15 09:08:41.585641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.072 [2024-05-15 09:08:41.585665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.072 qpair failed and we were unable to recover it. 00:42:47.072 [2024-05-15 09:08:41.585841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.072 [2024-05-15 09:08:41.585982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.072 [2024-05-15 09:08:41.586009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.072 qpair failed and we were unable to recover it. 00:42:47.072 [2024-05-15 09:08:41.586161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.072 [2024-05-15 09:08:41.586287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.072 [2024-05-15 09:08:41.586312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.072 qpair failed and we were unable to recover it. 
00:42:47.072 [2024-05-15 09:08:41.586439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.072 [2024-05-15 09:08:41.586562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.072 [2024-05-15 09:08:41.586590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.072 qpair failed and we were unable to recover it. 00:42:47.072 [2024-05-15 09:08:41.586754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.072 [2024-05-15 09:08:41.586940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.072 [2024-05-15 09:08:41.586964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.072 qpair failed and we were unable to recover it. 00:42:47.072 [2024-05-15 09:08:41.587070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.072 [2024-05-15 09:08:41.587195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.072 [2024-05-15 09:08:41.587226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.072 qpair failed and we were unable to recover it. 00:42:47.072 [2024-05-15 09:08:41.587367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.072 [2024-05-15 09:08:41.587478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.072 [2024-05-15 09:08:41.587506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.072 qpair failed and we were unable to recover it. 00:42:47.072 [2024-05-15 09:08:41.587648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.072 [2024-05-15 09:08:41.587765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.072 [2024-05-15 09:08:41.587793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.072 qpair failed and we were unable to recover it. 00:42:47.072 [2024-05-15 09:08:41.587939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.072 [2024-05-15 09:08:41.588067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.072 [2024-05-15 09:08:41.588092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.072 qpair failed and we were unable to recover it. 00:42:47.072 [2024-05-15 09:08:41.588200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.072 [2024-05-15 09:08:41.588327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.072 [2024-05-15 09:08:41.588355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.072 qpair failed and we were unable to recover it. 
00:42:47.072 [2024-05-15 09:08:41.588471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.072 [2024-05-15 09:08:41.588583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.072 [2024-05-15 09:08:41.588610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:47.072 qpair failed and we were unable to recover it.
[... the same four-line retry pattern repeats with advancing timestamps (09:08:41.588 through 09:08:41.617) for tqpair=0x16a1570: every connect() attempt to addr=10.0.0.2, port=4420 fails with errno = 111 and the qpair cannot be recovered ...]
00:42:47.076 [2024-05-15 09:08:41.617465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.076 [2024-05-15 09:08:41.617630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.076 [2024-05-15 09:08:41.617658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:47.076 qpair failed and we were unable to recover it.
00:42:47.076 [2024-05-15 09:08:41.617824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.076 [2024-05-15 09:08:41.617957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.076 [2024-05-15 09:08:41.617984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:47.076 qpair failed and we were unable to recover it.
00:42:47.076 [2024-05-15 09:08:41.618145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.076 [2024-05-15 09:08:41.618274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.076 [2024-05-15 09:08:41.618299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:47.076 qpair failed and we were unable to recover it.
00:42:47.076 [2024-05-15 09:08:41.618441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.076 [2024-05-15 09:08:41.618609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.076 [2024-05-15 09:08:41.618641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420
00:42:47.076 qpair failed and we were unable to recover it.
00:42:47.076 [2024-05-15 09:08:41.618781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.076 [2024-05-15 09:08:41.618944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.076 [2024-05-15 09:08:41.618972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420
00:42:47.076 qpair failed and we were unable to recover it.
00:42:47.076 [2024-05-15 09:08:41.619133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.076 [2024-05-15 09:08:41.619233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.076 [2024-05-15 09:08:41.619259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420
00:42:47.076 qpair failed and we were unable to recover it.
00:42:47.076 [2024-05-15 09:08:41.619383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.076 [2024-05-15 09:08:41.619499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.076 [2024-05-15 09:08:41.619527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420
00:42:47.076 qpair failed and we were unable to recover it.
00:42:47.076 [2024-05-15 09:08:41.619673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.076 [2024-05-15 09:08:41.619783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.076 [2024-05-15 09:08:41.619811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420
00:42:47.076 qpair failed and we were unable to recover it.
[... the same four-line retry pattern repeats with advancing timestamps (09:08:41.619 through 09:08:41.633) for tqpair=0x7f9f40000b90: every connect() attempt to addr=10.0.0.2, port=4420 fails with errno = 111 and the qpair cannot be recovered ...]
00:42:47.078 [2024-05-15 09:08:41.633868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.078 [2024-05-15 09:08:41.634013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.078 [2024-05-15 09:08:41.634042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420
00:42:47.078 qpair failed and we were unable to recover it.
00:42:47.078 [2024-05-15 09:08:41.634169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.634276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.634302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.078 qpair failed and we were unable to recover it. 00:42:47.078 [2024-05-15 09:08:41.634478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.634618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.634646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.078 qpair failed and we were unable to recover it. 00:42:47.078 [2024-05-15 09:08:41.634790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.634896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.634923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.078 qpair failed and we were unable to recover it. 00:42:47.078 [2024-05-15 09:08:41.635065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.635195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.635228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.078 qpair failed and we were unable to recover it. 00:42:47.078 [2024-05-15 09:08:41.635349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.635538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.635563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.078 qpair failed and we were unable to recover it. 00:42:47.078 [2024-05-15 09:08:41.635693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.635826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.635851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.078 qpair failed and we were unable to recover it. 00:42:47.078 [2024-05-15 09:08:41.636012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.636116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.636140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.078 qpair failed and we were unable to recover it. 
00:42:47.078 [2024-05-15 09:08:41.636272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.636408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.636439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.078 qpair failed and we were unable to recover it. 00:42:47.078 [2024-05-15 09:08:41.636588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.636723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.636752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.078 qpair failed and we were unable to recover it. 00:42:47.078 [2024-05-15 09:08:41.636929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.637036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.637061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.078 qpair failed and we were unable to recover it. 00:42:47.078 [2024-05-15 09:08:41.637160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.637284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.637310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.078 qpair failed and we were unable to recover it. 00:42:47.078 [2024-05-15 09:08:41.637456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.637598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.637627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.078 qpair failed and we were unable to recover it. 00:42:47.078 [2024-05-15 09:08:41.637748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.637875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.637900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.078 qpair failed and we were unable to recover it. 00:42:47.078 [2024-05-15 09:08:41.638051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.638220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.638249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.078 qpair failed and we were unable to recover it. 
00:42:47.078 [2024-05-15 09:08:41.638385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.638527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.638555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.078 qpair failed and we were unable to recover it. 00:42:47.078 [2024-05-15 09:08:41.638712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.638842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.638867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.078 qpair failed and we were unable to recover it. 00:42:47.078 [2024-05-15 09:08:41.638992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.078 [2024-05-15 09:08:41.639119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.639144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.639249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.639409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.639441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.639568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.639696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.639720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.639815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.639934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.639959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.640109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.640248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.640277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 
00:42:47.079 [2024-05-15 09:08:41.640405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.640533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.640558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.640670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.640769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.640794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.640914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.641041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.641066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.641225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.641328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.641354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.641483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.641625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.641652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.641795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.641941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.641969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.642112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.642245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.642275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 
00:42:47.079 [2024-05-15 09:08:41.642426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.642560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.642588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.642690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.642855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.642882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.643027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.643159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.643186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.643340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.643444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.643470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.643641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.643776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.643803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.643945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.644080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.644104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.644245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.644394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.644422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 
00:42:47.079 [2024-05-15 09:08:41.644539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.644657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.644685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.644815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.644916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.644942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.645097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.645258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.645301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.645445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.645609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.645634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.645737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.645870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.645894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.646043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.646180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.646209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.646389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.646519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.646545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 
00:42:47.079 [2024-05-15 09:08:41.646699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.646827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.646868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.646982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.647154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.647182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.647347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.647451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.079 [2024-05-15 09:08:41.647476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.079 qpair failed and we were unable to recover it. 00:42:47.079 [2024-05-15 09:08:41.647583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.647690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.647715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.647818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.647945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.647988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.648120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.648270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.648295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.648401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.648530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.648555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 
00:42:47.080 [2024-05-15 09:08:41.648662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.648792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.648818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.649002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.649115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.649144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.649281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.649394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.649419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.649584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.649691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.649718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.649836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.649982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.650009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.650150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.650271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.650296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.650445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.650582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.650611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 
00:42:47.080 [2024-05-15 09:08:41.650716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.650858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.650886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.651033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.651155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.651180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.651333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.651459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.651484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.651608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.651744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.651772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.651927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.652035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.652062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.652251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.652376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.652401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.652504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.652634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.652663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 
00:42:47.080 [2024-05-15 09:08:41.652822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.652952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.652976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.653133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.653250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.653276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.653393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.653506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.653534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.653712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.653823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.653848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.654003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.654150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.654178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.654332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.654436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.654464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.654617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.654748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.654773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 
00:42:47.080 [2024-05-15 09:08:41.654881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.655029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.655058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.655197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.655369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.655394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.655544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.655672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.655697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.080 [2024-05-15 09:08:41.655830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.655938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.080 [2024-05-15 09:08:41.655979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.080 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.656115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.656299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.656328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.656474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.656633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.656658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.656816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.656956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.656983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 
00:42:47.081 [2024-05-15 09:08:41.657155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.657288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.657315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.657451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.657581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.657605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.657738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.657849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.657876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.658021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.658163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.658190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.658354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.658506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.658547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.658711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.658896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.658921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.659051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.659204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.659237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 
00:42:47.081 [2024-05-15 09:08:41.659358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.659489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.659514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.659665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.659790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.659814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.659940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.660039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.660081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.660199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.660343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.660369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.660518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.660650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.660678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.660823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.660926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.660951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.661128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.661251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.661277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 
00:42:47.081 [2024-05-15 09:08:41.661382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.661557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.661585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.661726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.661871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.661899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.662047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.662175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.662200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.662404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.662530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.662555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.662707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.662881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.662906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.663033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.663134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.663160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.663285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.663440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.663465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 
00:42:47.081 [2024-05-15 09:08:41.663595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.663742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.663784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.663913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.664038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.664064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.664187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.664350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.664376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.081 qpair failed and we were unable to recover it. 00:42:47.081 [2024-05-15 09:08:41.664505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.664633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.081 [2024-05-15 09:08:41.664658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.082 qpair failed and we were unable to recover it. 00:42:47.082 [2024-05-15 09:08:41.664818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.082 [2024-05-15 09:08:41.664921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.082 [2024-05-15 09:08:41.664946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.082 qpair failed and we were unable to recover it. 00:42:47.082 [2024-05-15 09:08:41.665061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.082 [2024-05-15 09:08:41.665199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.082 [2024-05-15 09:08:41.665235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.082 qpair failed and we were unable to recover it. 00:42:47.082 [2024-05-15 09:08:41.665347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.082 [2024-05-15 09:08:41.665497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.082 [2024-05-15 09:08:41.665522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.082 qpair failed and we were unable to recover it. 
00:42:47.087 [2024-05-15 09:08:41.709926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.710027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.710052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.087 qpair failed and we were unable to recover it. 00:42:47.087 [2024-05-15 09:08:41.710183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.710343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.710371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.087 qpair failed and we were unable to recover it. 00:42:47.087 [2024-05-15 09:08:41.710552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.710661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.710685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.087 qpair failed and we were unable to recover it. 00:42:47.087 [2024-05-15 09:08:41.710784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.710879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.710905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.087 qpair failed and we were unable to recover it. 00:42:47.087 [2024-05-15 09:08:41.711059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.711202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.711237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.087 qpair failed and we were unable to recover it. 00:42:47.087 [2024-05-15 09:08:41.711348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.711493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.711521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.087 qpair failed and we were unable to recover it. 00:42:47.087 [2024-05-15 09:08:41.711665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.711791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.711816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.087 qpair failed and we were unable to recover it. 
00:42:47.087 [2024-05-15 09:08:41.711937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.712083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.712108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.087 qpair failed and we were unable to recover it. 00:42:47.087 [2024-05-15 09:08:41.712248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.712350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.712375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.087 qpair failed and we were unable to recover it. 00:42:47.087 [2024-05-15 09:08:41.712501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.712635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.712660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.087 qpair failed and we were unable to recover it. 00:42:47.087 [2024-05-15 09:08:41.712803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.712939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.712967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.087 qpair failed and we were unable to recover it. 00:42:47.087 [2024-05-15 09:08:41.713080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.713225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.713253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.087 qpair failed and we were unable to recover it. 00:42:47.087 [2024-05-15 09:08:41.713376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.713530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.087 [2024-05-15 09:08:41.713555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.087 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.713712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.713890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.713915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 
00:42:47.088 [2024-05-15 09:08:41.714067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.714222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.714251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.714401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.714525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.714550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.714674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.714818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.714846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.715024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.715161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.715190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.715329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.715426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.715451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.715543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.715674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.715717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.715883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.715993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.716021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 
00:42:47.088 [2024-05-15 09:08:41.716164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.716291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.716317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.716428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.716539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.716567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.716679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.716815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.716844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.716993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.717104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.717130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.717283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.717385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.717411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.717522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.717626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.717651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.717752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.717876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.717901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 
00:42:47.088 [2024-05-15 09:08:41.718016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.718130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.718159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.718279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.718445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.718473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.718603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.718733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.718758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.718877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.718973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.718998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.719103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.719233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.719286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.719445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.719546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.719573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.719703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.719871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.719899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 
00:42:47.088 [2024-05-15 09:08:41.720079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.720236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.720263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.720395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.720524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.720549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.720702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.720831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.720856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.721004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.721114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.721143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.721323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.721455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.721498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.721646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.721779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.721804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.088 qpair failed and we were unable to recover it. 00:42:47.088 [2024-05-15 09:08:41.721937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.088 [2024-05-15 09:08:41.722086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.722114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 
00:42:47.089 [2024-05-15 09:08:41.722292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.722425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.722450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.722624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.722758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.722783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.722922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.723031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.723060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.723240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.723349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.723377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.723492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.723624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.723650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.723743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.723916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.723944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.724094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.724197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.724227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 
00:42:47.089 [2024-05-15 09:08:41.724379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.724520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.724548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.724688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.724827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.724854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.725001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.725095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.725120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.725241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.725389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.725417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.725539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.725654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.725683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.725802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.725930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.725956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.726094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.726262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.726290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 
00:42:47.089 [2024-05-15 09:08:41.726439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.726572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.726600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.726756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.726913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.726962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.727116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.727253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.727279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.727412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.727571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.727596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.727699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.727852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.727877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.728002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.728100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.728126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.728273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.728411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.728439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 
00:42:47.089 [2024-05-15 09:08:41.728605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.728754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.728795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.728949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.729040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.729065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.729193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.729345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.729374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.729523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.729621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.729646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.729782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.729878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.729926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.730071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.730226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.730251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.089 qpair failed and we were unable to recover it. 00:42:47.089 [2024-05-15 09:08:41.730360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.730458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.089 [2024-05-15 09:08:41.730483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 
00:42:47.090 [2024-05-15 09:08:41.730584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.730711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.730736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.730879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.731024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.731052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.731223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.731344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.731369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.731504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.731608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.731633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.731758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.731930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.731954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.732083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.732184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.732208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.732338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.732491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.732516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 
00:42:47.090 [2024-05-15 09:08:41.732664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.732817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.732846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.732969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.733125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.733166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.733331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.733491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.733533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.733677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.733820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.733847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.733965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.734098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.734123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.734252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.734417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.734442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.734575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.734757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.734785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 
00:42:47.090 [2024-05-15 09:08:41.734942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.735070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.735110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.735298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.735418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.735444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.735590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.735760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.735785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.735914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.736006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.736035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.736161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.736316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.736345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.736493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.736679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.736736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.736876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.737007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.737032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 
00:42:47.090 [2024-05-15 09:08:41.737212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.737382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.737410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.737553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.737672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.737700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.737849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.737971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.737996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.738167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.738309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.738338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.738452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.738583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.738608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.738736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.738866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.738892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 00:42:47.090 [2024-05-15 09:08:41.739052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.739224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.739267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.090 qpair failed and we were unable to recover it. 
00:42:47.090 [2024-05-15 09:08:41.739379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.090 [2024-05-15 09:08:41.739478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.091 [2024-05-15 09:08:41.739503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.091 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 -> sock connection error -> "qpair failed and we were unable to recover it") repeats ~59 more times for tqpair=0x7f9f40000b90, timestamps 09:08:41.739628 through 09:08:41.756927 ...]
00:42:47.093 [2024-05-15 09:08:41.757074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.093 [2024-05-15 09:08:41.757258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.093 [2024-05-15 09:08:41.757291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.093 qpair failed and we were unable to recover it.
[... the same sequence repeats ~93 more times for tqpair=0x7f9f30000b90, timestamps 09:08:41.757456 through 09:08:41.784561 ...]
00:42:47.096 [2024-05-15 09:08:41.784663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.784807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.784837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.096 qpair failed and we were unable to recover it. 00:42:47.096 [2024-05-15 09:08:41.784986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.785090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.785115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.096 qpair failed and we were unable to recover it. 00:42:47.096 [2024-05-15 09:08:41.785248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.785392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.785418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.096 qpair failed and we were unable to recover it. 00:42:47.096 [2024-05-15 09:08:41.785545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.785696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.785723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.096 qpair failed and we were unable to recover it. 00:42:47.096 [2024-05-15 09:08:41.785858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.785987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.786015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.096 qpair failed and we were unable to recover it. 00:42:47.096 [2024-05-15 09:08:41.786166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.786268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.786294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.096 qpair failed and we were unable to recover it. 00:42:47.096 [2024-05-15 09:08:41.786414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.786513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.786539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.096 qpair failed and we were unable to recover it. 
00:42:47.096 [2024-05-15 09:08:41.786645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.786773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.786798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.096 qpair failed and we were unable to recover it. 00:42:47.096 [2024-05-15 09:08:41.786920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.787078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.787103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.096 qpair failed and we were unable to recover it. 00:42:47.096 [2024-05-15 09:08:41.787211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.787347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.787372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.096 qpair failed and we were unable to recover it. 00:42:47.096 [2024-05-15 09:08:41.787515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.787610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.787634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.096 qpair failed and we were unable to recover it. 00:42:47.096 [2024-05-15 09:08:41.787792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.787915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.096 [2024-05-15 09:08:41.787943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.096 qpair failed and we were unable to recover it. 00:42:47.096 [2024-05-15 09:08:41.788062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.788174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.788201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.788334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.788465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.788491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 
00:42:47.097 [2024-05-15 09:08:41.788662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.788786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.788812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.788966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.789086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.789114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.789268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.789419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.789445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.789592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.789727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.789753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.789881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.790020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.790048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.790154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.790292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.790321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.790436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.790584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.790626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 
00:42:47.097 [2024-05-15 09:08:41.790732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.790835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.790862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.790989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.791108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.791138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.791256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.791387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.791413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.791543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.791711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.791739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.791886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.792023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.792049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.792195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.792325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.792353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.792493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.792596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.792625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 
00:42:47.097 [2024-05-15 09:08:41.792785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.792905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.792930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.793033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.793186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.793211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.793422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.793553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.793579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.793758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.793911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.793936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.794045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.794195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.794228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.794359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.794471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.794497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.794602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.794708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.794733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 
00:42:47.097 [2024-05-15 09:08:41.794861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.794963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.795007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.795147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.795299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.795325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.795433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.795561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.795586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.795714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.795828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.795874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.795978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.796096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.796124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.796267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.796422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.796448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.097 [2024-05-15 09:08:41.796555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.796681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.796707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 
00:42:47.097 [2024-05-15 09:08:41.796807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.796940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.097 [2024-05-15 09:08:41.796967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.097 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.797103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.797227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.797256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.797421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.797536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.797564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.797702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.797810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.797836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.797988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.798136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.798162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.798326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.798474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.798504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.798621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.798788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.798817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 
00:42:47.098 [2024-05-15 09:08:41.798941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.799097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.799122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.799236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.799344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.799373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.799488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.799630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.799658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.799815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.799944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.799969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.800079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.800187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.800213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.800374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.800546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.800571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.800680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.800792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.800818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 
00:42:47.098 [2024-05-15 09:08:41.800951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.801054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.801081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.801206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.801370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.801396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.801549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.801667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.801696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.801841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.801979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.802007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.802122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.802253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.802295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.802394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.802497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.802524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.802678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.802795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.802824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 
00:42:47.098 [2024-05-15 09:08:41.802937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.803055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.803084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.803204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.803331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.803357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.803469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.803575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.803601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.803709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.803809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.803834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.803941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.804066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.804094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.804231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.804345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.804373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.804503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.804605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.804630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 
00:42:47.098 [2024-05-15 09:08:41.804802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.804945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.804973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.805115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.805224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.805253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.805369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.805483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.805511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.805631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.805738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.805763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.805911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.806041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.806070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.098 qpair failed and we were unable to recover it. 00:42:47.098 [2024-05-15 09:08:41.806197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.098 [2024-05-15 09:08:41.806354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.806380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 00:42:47.099 [2024-05-15 09:08:41.806487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.806640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.806668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 
00:42:47.099 [2024-05-15 09:08:41.806851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.807019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.807048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 00:42:47.099 [2024-05-15 09:08:41.807163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.807276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.807304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 00:42:47.099 [2024-05-15 09:08:41.807452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.807616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.807641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 00:42:47.099 [2024-05-15 09:08:41.807797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.807928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.807953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 00:42:47.099 [2024-05-15 09:08:41.808104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.808231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.808257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 00:42:47.099 [2024-05-15 09:08:41.808360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.808463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.808505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 00:42:47.099 [2024-05-15 09:08:41.808643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.808767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.808792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 
00:42:47.099 [2024-05-15 09:08:41.808889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.808988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.809013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 00:42:47.099 [2024-05-15 09:08:41.809140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.809252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.809278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 00:42:47.099 [2024-05-15 09:08:41.809412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.809590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.809618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 00:42:47.099 [2024-05-15 09:08:41.809750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.809880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.809905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 00:42:47.099 [2024-05-15 09:08:41.810038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.810180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.810208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 00:42:47.099 [2024-05-15 09:08:41.810372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.810478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.810503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 00:42:47.099 [2024-05-15 09:08:41.810606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.810748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.810773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 
00:42:47.099 [2024-05-15 09:08:41.810927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.811070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.811098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 00:42:47.099 [2024-05-15 09:08:41.811243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.811381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.811410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 00:42:47.099 [2024-05-15 09:08:41.811562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.811668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.811695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 00:42:47.099 [2024-05-15 09:08:41.811819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.811960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.811989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 00:42:47.099 [2024-05-15 09:08:41.812141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.812290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.812317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 00:42:47.099 [2024-05-15 09:08:41.812427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.812545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.812573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 00:42:47.099 [2024-05-15 09:08:41.812727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.812831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.099 [2024-05-15 09:08:41.812856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420 00:42:47.099 qpair failed and we were unable to recover it. 
00:42:47.099 [2024-05-15 09:08:41.812963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.099 [2024-05-15 09:08:41.813061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.099 [2024-05-15 09:08:41.813088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f30000b90 with addr=10.0.0.2, port=4420
00:42:47.099 qpair failed and we were unable to recover it.
00:42:47.099 [... the same four-record sequence (two posix_sock_create connect() failures, one nvme_tcp_qpair_connect_sock error, "qpair failed and we were unable to recover it.") repeats back-to-back for tqpair=0x7f9f30000b90: 140 occurrences in total, app timestamps 2024-05-15 09:08:41.812963 through 09:08:41.853143, job time 00:42:47.099 through 00:42:47.382 ...]
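For context on the repeated record above: errno 111 on Linux is ECONNREFUSED, meaning the host at 10.0.0.2 is reachable but nothing is accepting connections on port 4420 (the IANA default NVMe/TCP port), which is typical when the nvmf target has not started listening yet or has been torn down mid-test. The following is a minimal, self-contained C sketch (illustrative only, not SPDK's posix_sock_create) that reproduces the same errno against a port with no listener; the address and port are taken from the log:

```c
/* Illustrative sketch, not SPDK code: a plain connect() to a reachable
 * host with no listener on the target port fails with errno 111
 * (ECONNREFUSED), the same errno posix_sock_create reports above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* default NVMe/TCP port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the host up but no listener bound to 10.0.0.2:4420 this
         * prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```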
00:42:47.382 [2024-05-15 09:08:41.853289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.382 [2024-05-15 09:08:41.853480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.383 [2024-05-15 09:08:41.853512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:47.383 qpair failed and we were unable to recover it.
00:42:47.383 [2024-05-15 09:08:41.853674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.383 [2024-05-15 09:08:41.853792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.383 [2024-05-15 09:08:41.853820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:47.383 qpair failed and we were unable to recover it.
00:42:47.383 [2024-05-15 09:08:41.853991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.383 [2024-05-15 09:08:41.854135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.383 [2024-05-15 09:08:41.854165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:47.383 qpair failed and we were unable to recover it.
00:42:47.383 [2024-05-15 09:08:41.854298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.383 [2024-05-15 09:08:41.854403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.383 [2024-05-15 09:08:41.854429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:47.383 qpair failed and we were unable to recover it.
00:42:47.383 [2024-05-15 09:08:41.854548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.383 [2024-05-15 09:08:41.854720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.383 [2024-05-15 09:08:41.854748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:47.383 qpair failed and we were unable to recover it.
00:42:47.383 [2024-05-15 09:08:41.854866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.383 [2024-05-15 09:08:41.855037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.383 [2024-05-15 09:08:41.855062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:47.383 qpair failed and we were unable to recover it.
00:42:47.383 [2024-05-15 09:08:41.855189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.383 [2024-05-15 09:08:41.855347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.383 [2024-05-15 09:08:41.855375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:47.383 qpair failed and we were unable to recover it.
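The repeated "errno = 111" records above are Linux ECONNREFUSED: while the target side is down, every connect() from the host to 10.0.0.2 port 4420 (the conventional NVMe/TCP port) is refused with a TCP RST, so each qpair connect attempt fails immediately and is retried. A minimal standalone C sketch of the same failure, with the address and port taken from the log and everything else generic POSIX (illustrative only, not the test's code):

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    /* Plain blocking TCP socket, as the posix sock layer uses underneath. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    /* If the host is reachable but nothing listens on the port, the peer
     * answers RST and connect() fails with errno 111 (ECONNREFUSED);
     * an unreachable host would give a different errno instead. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}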
00:42:47.385 [2024-05-15 09:08:41.876398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.385 [2024-05-15 09:08:41.876541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.385 [2024-05-15 09:08:41.876572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:47.385 qpair failed and we were unable to recover it.
00:42:47.385 [2024-05-15 09:08:41.876622] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16af0f0 (9): Bad file descriptor
00:42:47.385 [2024-05-15 09:08:41.876905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.385 [2024-05-15 09:08:41.877043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.385 [2024-05-15 09:08:41.877094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420
00:42:47.385 qpair failed and we were unable to recover it.
00:42:47.385 [2024-05-15 09:08:41.877238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.385 [2024-05-15 09:08:41.877377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.385 [2024-05-15 09:08:41.877405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420
00:42:47.385 qpair failed and we were unable to recover it.
00:42:47.385 [2024-05-15 09:08:41.877532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.385 [2024-05-15 09:08:41.877683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.386 [2024-05-15 09:08:41.877710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420
00:42:47.386 qpair failed and we were unable to recover it.
00:42:47.386 [2024-05-15 09:08:41.877866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.386 [2024-05-15 09:08:41.878009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.386 [2024-05-15 09:08:41.878035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420
00:42:47.386 qpair failed and we were unable to recover it.
00:42:47.386 [2024-05-15 09:08:41.878164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.386 [2024-05-15 09:08:41.878263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.386 [2024-05-15 09:08:41.878289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420
00:42:47.386 qpair failed and we were unable to recover it.
00:42:47.386 [2024-05-15 09:08:41.878407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.386 [2024-05-15 09:08:41.878578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.386 [2024-05-15 09:08:41.878626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420
00:42:47.386 qpair failed and we were unable to recover it.
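One record in the block above differs from the connect() spam: nvme_tcp_qpair_process_completions reports "Failed to flush tqpair=0x16af0f0 (9): Bad file descriptor". Error 9 is EBADF, meaning the qpair's socket had already been closed or invalidated by the time the completion path tried to flush it. A tiny C sketch of how EBADF surfaces once a descriptor is gone (illustrative only; the pipe merely stands in for the qpair's socket):

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }

    /* Close the write end, then try to use it anyway, the way a
     * completion/flush path can race with a qpair teardown. */
    close(fds[1]);

    if (write(fds[1], "x", 1) < 0) {
        /* Prints errno = 9 (Bad file descriptor). */
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fds[0]);
    return 0;
}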
00:42:47.388 [2024-05-15 09:08:41.895001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.895130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.895156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.895289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.895457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.895508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.895617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.895722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.895747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.895856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.896010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.896035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.896167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.896335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.896361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.896470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.896569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.896595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.896697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.896849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.896875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 
00:42:47.388 [2024-05-15 09:08:41.896974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.897104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.897130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.897239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.897341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.897367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.897478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.897630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.897674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.897777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.897933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.897958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.898110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.898242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.898268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.898376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.898483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.898509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.898638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.898793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.898818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 
00:42:47.388 [2024-05-15 09:08:41.898918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.899076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.899100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.899229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.899357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.899401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.899553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.899698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.899723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.899851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.899983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.900008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.900142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.900287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.900315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.900444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.900573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.900599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.900751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.900914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.900939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 
00:42:47.388 [2024-05-15 09:08:41.901065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.901174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.901198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.901318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.901417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.901443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.901557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.901685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.901709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.901807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.901959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.901983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.902116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.902267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.902293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.388 qpair failed and we were unable to recover it. 00:42:47.388 [2024-05-15 09:08:41.902400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.388 [2024-05-15 09:08:41.902542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.902569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 00:42:47.389 [2024-05-15 09:08:41.902695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.902824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.902849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 
00:42:47.389 [2024-05-15 09:08:41.903002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.903112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.903139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 00:42:47.389 [2024-05-15 09:08:41.903284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.903411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.903436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 00:42:47.389 [2024-05-15 09:08:41.903556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.903705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.903729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 00:42:47.389 [2024-05-15 09:08:41.903861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.903966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.903990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 00:42:47.389 [2024-05-15 09:08:41.904151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.904257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.904283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 00:42:47.389 [2024-05-15 09:08:41.904462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.904663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.904691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 00:42:47.389 [2024-05-15 09:08:41.904888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.905056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.905081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 
00:42:47.389 [2024-05-15 09:08:41.905186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.905346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.905389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 00:42:47.389 [2024-05-15 09:08:41.905563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.905730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.905772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 00:42:47.389 [2024-05-15 09:08:41.905907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.906005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.906030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 00:42:47.389 [2024-05-15 09:08:41.906139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.906265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.906290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 00:42:47.389 [2024-05-15 09:08:41.906440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.906612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.906654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 00:42:47.389 [2024-05-15 09:08:41.906806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.906928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.906954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 00:42:47.389 [2024-05-15 09:08:41.907105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.907211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.907243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 
00:42:47.389 [2024-05-15 09:08:41.907400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.907546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.907572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 00:42:47.389 [2024-05-15 09:08:41.907704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.907854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.907879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 00:42:47.389 [2024-05-15 09:08:41.908038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.908133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.908158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 00:42:47.389 [2024-05-15 09:08:41.908274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.908455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.908481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 00:42:47.389 [2024-05-15 09:08:41.908607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.908706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.908731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 00:42:47.389 [2024-05-15 09:08:41.908858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.908987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.909012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 00:42:47.389 [2024-05-15 09:08:41.909142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.909252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.909278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 
00:42:47.389 [2024-05-15 09:08:41.909388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.909494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.909519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.389 qpair failed and we were unable to recover it. 00:42:47.389 [2024-05-15 09:08:41.909621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.389 [2024-05-15 09:08:41.909747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.909773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.909876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.910009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.910034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.910165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.910299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.910326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.910478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.910625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.910651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.910760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.910876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.910900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.911000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.911110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.911135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 
00:42:47.390 [2024-05-15 09:08:41.911269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.911380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.911405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.911530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.911683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.911707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.911811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.911942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.911966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.912095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.912224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.912249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.912373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.912499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.912523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.912620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.912755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.912779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.912911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.913066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.913091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 
00:42:47.390 [2024-05-15 09:08:41.913198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.913335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.913360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.913492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.913620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.913645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.913800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.913902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.913928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.914028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.914151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.914176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.914351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.914505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.914548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.914674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.914822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.914847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.914957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.915081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.915106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 
00:42:47.390 [2024-05-15 09:08:41.915241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.915363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.915388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.915530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.915649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.915673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.915833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.915962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.915987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.916114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.916226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.916252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.916381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.916526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.916551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.916679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.916810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.916834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.916968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.917121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.917146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 
00:42:47.390 [2024-05-15 09:08:41.917302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.917423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.917451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.390 [2024-05-15 09:08:41.917600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.917733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.390 [2024-05-15 09:08:41.917760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.390 qpair failed and we were unable to recover it. 00:42:47.391 [2024-05-15 09:08:41.917868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.917977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.918002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.391 qpair failed and we were unable to recover it. 00:42:47.391 [2024-05-15 09:08:41.918102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.918226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.918251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.391 qpair failed and we were unable to recover it. 00:42:47.391 [2024-05-15 09:08:41.918359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.918464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.918489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.391 qpair failed and we were unable to recover it. 00:42:47.391 [2024-05-15 09:08:41.918622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.918760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.918785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.391 qpair failed and we were unable to recover it. 00:42:47.391 [2024-05-15 09:08:41.918889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.919009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.919033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.391 qpair failed and we were unable to recover it. 
00:42:47.391 [2024-05-15 09:08:41.919140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.919269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.919294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.391 qpair failed and we were unable to recover it. 00:42:47.391 [2024-05-15 09:08:41.919400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.919508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.919533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.391 qpair failed and we were unable to recover it. 00:42:47.391 [2024-05-15 09:08:41.919637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.919769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.919795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.391 qpair failed and we were unable to recover it. 00:42:47.391 [2024-05-15 09:08:41.919924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.920055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.920079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.391 qpair failed and we were unable to recover it. 00:42:47.391 [2024-05-15 09:08:41.920206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.920316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.920341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.391 qpair failed and we were unable to recover it. 00:42:47.391 [2024-05-15 09:08:41.920458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.920602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.920626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.391 qpair failed and we were unable to recover it. 00:42:47.391 [2024-05-15 09:08:41.920729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.920835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.920860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.391 qpair failed and we were unable to recover it. 
00:42:47.391 [2024-05-15 09:08:41.921000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.921128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.921152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.391 qpair failed and we were unable to recover it. 00:42:47.391 [2024-05-15 09:08:41.921276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.921413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.921438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.391 qpair failed and we were unable to recover it. 00:42:47.391 [2024-05-15 09:08:41.921544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.921673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.921697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.391 qpair failed and we were unable to recover it. 00:42:47.391 [2024-05-15 09:08:41.921806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.921966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.921992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.391 qpair failed and we were unable to recover it. 00:42:47.391 [2024-05-15 09:08:41.922105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.922242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.922268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.391 qpair failed and we were unable to recover it. 00:42:47.391 [2024-05-15 09:08:41.922393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.922535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.922563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.391 qpair failed and we were unable to recover it. 00:42:47.391 [2024-05-15 09:08:41.922686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.922789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.391 [2024-05-15 09:08:41.922814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.391 qpair failed and we were unable to recover it. 
00:42:47.391 [2024-05-15 09:08:41.922921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.391 [2024-05-15 09:08:41.923026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.391 [2024-05-15 09:08:41.923050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420
00:42:47.391 qpair failed and we were unable to recover it.
[... the same connect()-failed / sock-connection-error / qpair-failed sequence for tqpair=0x7f9f38000b90 (addr=10.0.0.2, port=4420) repeats continuously from 09:08:41.923 through 09:08:41.968 ...]
00:42:47.397 [2024-05-15 09:08:41.968173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.397 [2024-05-15 09:08:41.968329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.397 [2024-05-15 09:08:41.968373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420
00:42:47.397 qpair failed and we were unable to recover it.
00:42:47.397 [2024-05-15 09:08:41.968500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.968642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.968686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.397 qpair failed and we were unable to recover it. 00:42:47.397 [2024-05-15 09:08:41.968814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.968982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.969007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.397 qpair failed and we were unable to recover it. 00:42:47.397 [2024-05-15 09:08:41.969115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.969237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.969263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.397 qpair failed and we were unable to recover it. 00:42:47.397 [2024-05-15 09:08:41.969404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.969573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.969632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.397 qpair failed and we were unable to recover it. 00:42:47.397 [2024-05-15 09:08:41.969765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.969898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.969925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.397 qpair failed and we were unable to recover it. 00:42:47.397 [2024-05-15 09:08:41.970057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.970187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.970211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.397 qpair failed and we were unable to recover it. 00:42:47.397 [2024-05-15 09:08:41.970361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.970540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.970582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.397 qpair failed and we were unable to recover it. 
00:42:47.397 [2024-05-15 09:08:41.970714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.970816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.970842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.397 qpair failed and we were unable to recover it. 00:42:47.397 [2024-05-15 09:08:41.970946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.971073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.971098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.397 qpair failed and we were unable to recover it. 00:42:47.397 [2024-05-15 09:08:41.971234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.971356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.971385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.397 qpair failed and we were unable to recover it. 00:42:47.397 [2024-05-15 09:08:41.971559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.971712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.971736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.397 qpair failed and we were unable to recover it. 00:42:47.397 [2024-05-15 09:08:41.971866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.972002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.972027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.397 qpair failed and we were unable to recover it. 00:42:47.397 [2024-05-15 09:08:41.972124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.972255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.397 [2024-05-15 09:08:41.972281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.397 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.972439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.972622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.972678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 
00:42:47.398 [2024-05-15 09:08:41.972832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.972929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.972954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.973114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.973224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.973250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.973407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.973585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.973612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.973783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.973914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.973939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.974069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.974227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.974253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.974377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.974526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.974554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.974740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.974872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.974897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 
00:42:47.398 [2024-05-15 09:08:41.975050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.975188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.975234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.975368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.975572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.975616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.975732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.975874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.975899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.976000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.976154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.976179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.976349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.976528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.976571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.976721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.976862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.976906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.977011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.977118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.977143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 
00:42:47.398 [2024-05-15 09:08:41.977291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.977438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.977462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.977597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.977726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.977751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.977857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.978010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.978035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.978190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.978314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.978345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.978478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.978593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.978619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.978782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.978915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.978940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.979074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.979211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.979240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 
00:42:47.398 [2024-05-15 09:08:41.979364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.979509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.979552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.979710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.979855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.979880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.979984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.980103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.980129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.398 [2024-05-15 09:08:41.980297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.980449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.398 [2024-05-15 09:08:41.980493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.398 qpair failed and we were unable to recover it. 00:42:47.399 [2024-05-15 09:08:41.980623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.980743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.980769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 00:42:47.399 [2024-05-15 09:08:41.980923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.981047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.981072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 00:42:47.399 [2024-05-15 09:08:41.981200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.981308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.981338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 
00:42:47.399 [2024-05-15 09:08:41.981444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.981596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.981622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 00:42:47.399 [2024-05-15 09:08:41.981751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.981865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.981890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 00:42:47.399 [2024-05-15 09:08:41.982021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.982128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.982152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 00:42:47.399 [2024-05-15 09:08:41.982281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.982402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.982445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 00:42:47.399 [2024-05-15 09:08:41.982571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.982730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.982755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 00:42:47.399 [2024-05-15 09:08:41.982864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.982968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.982994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 00:42:47.399 [2024-05-15 09:08:41.983130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.983295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.983322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 
00:42:47.399 [2024-05-15 09:08:41.983452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.983608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.983652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 00:42:47.399 [2024-05-15 09:08:41.983807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.983913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.983938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 00:42:47.399 [2024-05-15 09:08:41.984081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.984208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.984243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 00:42:47.399 [2024-05-15 09:08:41.984352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.984457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.984483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 00:42:47.399 [2024-05-15 09:08:41.984582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.984691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.984716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 00:42:47.399 [2024-05-15 09:08:41.984848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.984954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.984979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 00:42:47.399 [2024-05-15 09:08:41.985107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.985267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.985292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 
00:42:47.399 [2024-05-15 09:08:41.985431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.985558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.985583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 00:42:47.399 [2024-05-15 09:08:41.985684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.985787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.985812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 00:42:47.399 [2024-05-15 09:08:41.985921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.986030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.986059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 00:42:47.399 [2024-05-15 09:08:41.986200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.986335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.986362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 00:42:47.399 [2024-05-15 09:08:41.986468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.986576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.986600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 00:42:47.399 [2024-05-15 09:08:41.986704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.986838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.986868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.399 qpair failed and we were unable to recover it. 00:42:47.399 [2024-05-15 09:08:41.986999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.987104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.399 [2024-05-15 09:08:41.987130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 
00:42:47.400 [2024-05-15 09:08:41.987316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.987445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.987471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.987642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.987791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.987816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.987928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.988051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.988076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.988208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.988312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.988336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.988444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.988577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.988602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.988738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.988863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.988888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.989017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.989144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.989170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 
00:42:47.400 [2024-05-15 09:08:41.989283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.989406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.989431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.989564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.989695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.989719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.989827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.989966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.989991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.990118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.990257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.990283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.990390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.990542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.990567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.990668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.990799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.990824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.990957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.991091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.991117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 
00:42:47.400 [2024-05-15 09:08:41.991226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.991338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.991362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.991468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.991594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.991619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.991749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.991900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.991924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.992026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.992149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.992176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.992336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.992495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.992536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.992695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.992819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.992843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.992972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.993129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.993154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 
00:42:47.400 [2024-05-15 09:08:41.993290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.993464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.993492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.993628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.993771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.993796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.993928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.994058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.994084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.994188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.994340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.994366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.994500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.994682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.994709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.994857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.995011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.995037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 00:42:47.400 [2024-05-15 09:08:41.995137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.995265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.995291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.400 qpair failed and we were unable to recover it. 
00:42:47.400 [2024-05-15 09:08:41.995410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.400 [2024-05-15 09:08:41.995531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.995557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:41.995716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.995825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.995850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:41.995957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.996087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.996112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:41.996219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.996355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.996381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:41.996554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.996690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.996715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:41.996819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.996923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.996949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:41.997083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.997188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.997213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 
00:42:47.401 [2024-05-15 09:08:41.997376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.997475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.997499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:41.997633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.997741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.997766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:41.997897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.998000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.998024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:41.998124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.998234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.998260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:41.998430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.998596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.998621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:41.998728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.998835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.998860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:41.998989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.999144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.999169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 
00:42:47.401 [2024-05-15 09:08:41.999326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.999468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.999496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:41.999644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.999753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:41.999779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:41.999934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.000060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.000085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:42.000221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.000347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.000390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:42.000520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.000684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.000726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:42.000830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.000922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.000947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:42.001077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.001206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.001237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 
00:42:47.401 [2024-05-15 09:08:42.001374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.001511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.001536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:42.001695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.001823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.001848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:42.001976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.002080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.002106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:42.002213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.002375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.002419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:42.002545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.002687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.002712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:42.002805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.002915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.002939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:42.003066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.003171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.003195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 
00:42:47.401 [2024-05-15 09:08:42.003317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.003472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.401 [2024-05-15 09:08:42.003497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.401 qpair failed and we were unable to recover it. 00:42:47.401 [2024-05-15 09:08:42.003645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.003763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.003789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.003895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.004043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.004068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.004208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.004351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.004376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.004561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.004678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.004702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.004857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.004952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.004978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.005072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.005166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.005191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 
00:42:47.402 [2024-05-15 09:08:42.005344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.005536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.005578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.005677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.005781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.005806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.005934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.006084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.006109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.006267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.006404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.006433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.006595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.006744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.006770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.006925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.007032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.007056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.007245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.007386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.007414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 
00:42:47.402 [2024-05-15 09:08:42.007611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.007751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.007777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.007933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.008087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.008111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.008249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.008363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.008391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.008523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.008681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.008706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.008814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.008943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.008968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.009102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.009206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.009236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.009364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.009558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.009600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 
00:42:47.402 [2024-05-15 09:08:42.009727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.009876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.009901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.010004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.010140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.010165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.010329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.010472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.010507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.010684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.010844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.010885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.011060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.011184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.011228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.011385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.011511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.011557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.011713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.011846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.011879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 
00:42:47.402 [2024-05-15 09:08:42.012031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.012171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.012200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.012371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.012521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.402 [2024-05-15 09:08:42.012553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.402 qpair failed and we were unable to recover it. 00:42:47.402 [2024-05-15 09:08:42.012698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.012827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.012860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 00:42:47.403 [2024-05-15 09:08:42.012992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.013154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.013184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 00:42:47.403 [2024-05-15 09:08:42.013338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.013474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.013509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 00:42:47.403 [2024-05-15 09:08:42.013642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.013811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.013854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 00:42:47.403 [2024-05-15 09:08:42.013979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.014124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.014148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 
00:42:47.403 [2024-05-15 09:08:42.014295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.014446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.014471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 00:42:47.403 [2024-05-15 09:08:42.014597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.014731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.014756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 00:42:47.403 [2024-05-15 09:08:42.014885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.014993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.015019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 00:42:47.403 [2024-05-15 09:08:42.015134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.015239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.015266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 00:42:47.403 [2024-05-15 09:08:42.015432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.015551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.015580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 00:42:47.403 [2024-05-15 09:08:42.015704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.015835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.015860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 00:42:47.403 [2024-05-15 09:08:42.015962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.016091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.016116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 
00:42:47.403 [2024-05-15 09:08:42.016231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.016350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.016375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 00:42:47.403 [2024-05-15 09:08:42.016504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.016613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.016638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 00:42:47.403 [2024-05-15 09:08:42.016748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.016902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.016928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 00:42:47.403 [2024-05-15 09:08:42.017064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.017164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.017191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 00:42:47.403 [2024-05-15 09:08:42.017347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.017492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.017519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 00:42:47.403 [2024-05-15 09:08:42.017678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.017782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.017806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 00:42:47.403 [2024-05-15 09:08:42.017909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.018013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.018039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 
00:42:47.403 [2024-05-15 09:08:42.018147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.018280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.018307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 00:42:47.403 [2024-05-15 09:08:42.018453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.018586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.018612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 00:42:47.403 [2024-05-15 09:08:42.018745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.018859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.018883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 00:42:47.403 [2024-05-15 09:08:42.018990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.019113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.019138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 00:42:47.403 [2024-05-15 09:08:42.019254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.019387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.019417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.403 qpair failed and we were unable to recover it. 00:42:47.403 [2024-05-15 09:08:42.019573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.403 [2024-05-15 09:08:42.019683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.019708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.019840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.019947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.019972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 
00:42:47.404 [2024-05-15 09:08:42.020101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.020201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.020232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.020343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.020446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.020471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.020575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.020704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.020730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.020861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.020985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.021010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.021140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.021259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.021285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.021407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.021508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.021535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.021668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.021773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.021800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 
00:42:47.404 [2024-05-15 09:08:42.021926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.022035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.022065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.022176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.022281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.022308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.022430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.022528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.022555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.022653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.022782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.022808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.022915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.023045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.023070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.023189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.023322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.023367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.023521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.023650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.023674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 
00:42:47.404 [2024-05-15 09:08:42.023777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.023885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.023909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.024032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.024154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.024178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.024312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.024445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.024470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.024627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.024728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.024756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.024865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.024995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.025020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.025147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.025268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.025293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.025421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.025531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.025555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 
00:42:47.404 [2024-05-15 09:08:42.025666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.025817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.025842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.025947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.026054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.026080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.026221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.026352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.026378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.026492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.026645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.026671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.026807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.026907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.026933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.027038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.027177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.404 [2024-05-15 09:08:42.027201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.404 qpair failed and we were unable to recover it. 00:42:47.404 [2024-05-15 09:08:42.027337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.027503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.027551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 
00:42:47.405 [2024-05-15 09:08:42.027673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.027801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.027827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.027962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.028063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.028088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.028196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.028328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.028373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.028502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.028643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.028673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.028823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.028976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.029001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.029132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.029241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.029266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.029384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.029493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.029518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 
00:42:47.405 [2024-05-15 09:08:42.029654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.029782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.029808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.029932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.030040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.030065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.030190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.030335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.030379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.030540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.030652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.030677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.030803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.030910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.030937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.031046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.031150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.031176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.031323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.031467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.031512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 
00:42:47.405 [2024-05-15 09:08:42.031679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.031803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.031827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.031969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.032070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.032094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.032234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.032363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.032407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.032556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.032683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.032708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.032815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.032945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.032971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.033105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.033241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.033273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.033427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.033551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.033577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 
00:42:47.405 [2024-05-15 09:08:42.033710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.033817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.033842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.033975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.034074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.034099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.034209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.034345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.034371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.034529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.034637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.034662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.034771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.034876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.034901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.035032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.035141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.035167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 00:42:47.405 [2024-05-15 09:08:42.035301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.035414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.405 [2024-05-15 09:08:42.035441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.405 qpair failed and we were unable to recover it. 
00:42:47.406 [2024-05-15 09:08:42.035566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.035672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.035699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.035805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.035960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.035986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.036124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.036230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.036256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.036391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.036522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.036565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.036692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.036871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.036896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.037023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.037154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.037180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.037337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.037482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.037510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 
00:42:47.406 [2024-05-15 09:08:42.037648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.037810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.037834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.037937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.038060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.038085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.038220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.038327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.038351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.038461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.038567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.038593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.038696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.038833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.038858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.038964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.039074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.039099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.039204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.039336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.039362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 
00:42:47.406 [2024-05-15 09:08:42.039497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.039628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.039655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.039801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.039913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.039938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.040039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.040162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.040187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.040345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.040486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.040516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.040642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.040759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.040786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.040892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.041020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.041045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.041154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.041263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.041289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 
00:42:47.406 [2024-05-15 09:08:42.041422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.041549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.041592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.041727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.041855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.041879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.041980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.042118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.042144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.042301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.042448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.042494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.042636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.042792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.042817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.042923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.043032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.043056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.406 [2024-05-15 09:08:42.043167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.043298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.043323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 
00:42:47.406 [2024-05-15 09:08:42.043481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.043646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.406 [2024-05-15 09:08:42.043689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.406 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.043791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.043926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.043951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.044058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.044166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.044191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.044348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.044503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.044546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.044698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.044824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.044849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.044985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.045089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.045115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.045270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.045401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.045426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 
00:42:47.407 [2024-05-15 09:08:42.045529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.045641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.045667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.045810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.045919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.045944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.046049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.046156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.046181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.046346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.046456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.046481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.046590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.046700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.046725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.046852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.046955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.046980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.047084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.047202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.047233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 
00:42:47.407 [2024-05-15 09:08:42.047372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.047477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.047502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.047605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.047731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.047755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.047887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.047987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.048011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.048140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.048246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.048272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.048384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.048490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.048516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.048650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.048789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.048814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.048917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.049028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.049054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 
00:42:47.407 [2024-05-15 09:08:42.049163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.049315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.049340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.049462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.049631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.049675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.049812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.049940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.049966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f38000b90 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.050110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.050270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.050304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.050440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.050614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.050647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.050762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.050877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.050907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.051038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.051140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.051168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 
00:42:47.407 [2024-05-15 09:08:42.051409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.051561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.051592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.407 qpair failed and we were unable to recover it. 00:42:47.407 [2024-05-15 09:08:42.051726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.051863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.407 [2024-05-15 09:08:42.051895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.052110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.052265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.052295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.052464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.052724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.052777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.052927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.053073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.053103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.053219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.053326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.053353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.053465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.053602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.053629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 
00:42:47.408 [2024-05-15 09:08:42.053792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.053931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.053964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.054151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.054291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.054318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.054464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.054603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.054633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.054788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.054955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.054992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.055132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.055245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.055275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.055417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.055551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.055584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.055733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.055877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.055910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 
00:42:47.408 [2024-05-15 09:08:42.056132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.056254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.056300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.056412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.056551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.056581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.056763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.056916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.056953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.057189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.057350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.057377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.057517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.057772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.057826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.058001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.058143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.058169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.058310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.058444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.058471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 
00:42:47.408 [2024-05-15 09:08:42.058609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.058726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.058755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.058877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.059014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.059046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.059244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.059384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.059413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.059539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.059646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.059679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.059840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.060017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.408 [2024-05-15 09:08:42.060046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.408 qpair failed and we were unable to recover it. 00:42:47.408 [2024-05-15 09:08:42.060196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.060320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.060350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.060474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.060632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.060664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 
00:42:47.409 [2024-05-15 09:08:42.060809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.060964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.060997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.061228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.061366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.061392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.061565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.061704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.061732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.061876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.062015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.062043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.062186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.062361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.062390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.062505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.062634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.062661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.062818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.062961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.062991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 
00:42:47.409 [2024-05-15 09:08:42.063236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.063362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.063388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.063510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.063686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.063714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.063879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.064042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.064068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.064242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.064356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.064393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.064531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.064642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.064672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.064855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.064974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.065005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.065177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.065313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.065340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 
00:42:47.409 [2024-05-15 09:08:42.065450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.065646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.065676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.065799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.065909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.065940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.066147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.066288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.066317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.066422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.066572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.066602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.066746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.066875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.066906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.067101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.067227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.067276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.067408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.067567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.067601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 
00:42:47.409 [2024-05-15 09:08:42.067769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.067887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.067919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.068056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.068199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.068241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.068405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.068539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.068568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.068702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.068826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.068860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.069065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.069212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.069252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.069417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.069527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.069556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 00:42:47.409 [2024-05-15 09:08:42.069713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.069908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.069939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.409 qpair failed and we were unable to recover it. 
00:42:47.409 [2024-05-15 09:08:42.070091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.409 [2024-05-15 09:08:42.070258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.410 [2024-05-15 09:08:42.070286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.410 qpair failed and we were unable to recover it. 00:42:47.410 [2024-05-15 09:08:42.070405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.410 [2024-05-15 09:08:42.070521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.410 [2024-05-15 09:08:42.070549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.410 qpair failed and we were unable to recover it. 00:42:47.410 [2024-05-15 09:08:42.070718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.410 [2024-05-15 09:08:42.070832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.410 [2024-05-15 09:08:42.070866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.410 qpair failed and we were unable to recover it. 00:42:47.410 [2024-05-15 09:08:42.071000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.410 [2024-05-15 09:08:42.071170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.410 [2024-05-15 09:08:42.071199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.410 qpair failed and we were unable to recover it. 00:42:47.410 [2024-05-15 09:08:42.071364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.410 [2024-05-15 09:08:42.071518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.410 [2024-05-15 09:08:42.071551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.410 qpair failed and we were unable to recover it. 00:42:47.410 [2024-05-15 09:08:42.071685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.410 [2024-05-15 09:08:42.071825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.410 [2024-05-15 09:08:42.071855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.410 qpair failed and we were unable to recover it. 00:42:47.410 [2024-05-15 09:08:42.072025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.410 [2024-05-15 09:08:42.072147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.410 [2024-05-15 09:08:42.072177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.410 qpair failed and we were unable to recover it. 
00:42:47.410 [2024-05-15 09:08:42.072317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.410 [2024-05-15 09:08:42.072453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.410 [2024-05-15 09:08:42.072483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:47.410 qpair failed and we were unable to recover it.
00:42:47.410 [... the same four-line failure sequence (two posix_sock_create connect() failures with errno = 111, an nvme_tcp_qpair_connect_sock error on tqpair=0x16a1570 at addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats back-to-back roughly 150 more times, with only the microsecond timestamps advancing, from 09:08:42.072 through 09:08:42.118 (log clock 00:42:47.410 to 00:42:47.415) ...]
00:42:47.415 [2024-05-15 09:08:42.118993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.119125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.119151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.119291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.119392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.119416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.119541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.119666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.119693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.119837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.119946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.119973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.120105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.120235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.120261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.120395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.120545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.120572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.120741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.120856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.120882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 
00:42:47.415 [2024-05-15 09:08:42.121029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.121133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.121157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.121286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.121383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.121407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.121517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.121677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.121704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.121852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.122001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.122025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.122182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.122318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.122344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.122454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.122610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.122637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.122809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.122935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.122965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 
00:42:47.415 [2024-05-15 09:08:42.123093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.123210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.123258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.123423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.123547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.123573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.123696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.123830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.123854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.123955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.124066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.124091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.124199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.124326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.124352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.124475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.124578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.124602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.124731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.124886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.124911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 
00:42:47.415 [2024-05-15 09:08:42.125018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.125206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.125237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.125366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.125483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.125508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.125620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.125747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.125771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.125910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.126053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.126082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.126207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.126314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.126340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.126454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.126557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.126581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.126761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.126927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.126954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 
00:42:47.415 [2024-05-15 09:08:42.127099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.127210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.127241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.127346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.127478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.127503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.127627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.127729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.127754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.415 [2024-05-15 09:08:42.127906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.128009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.415 [2024-05-15 09:08:42.128034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.415 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.128138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.128298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.128324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.128477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.128630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.128657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.128817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.128920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.128945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 
00:42:47.416 [2024-05-15 09:08:42.129072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.129231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.129255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.129414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.129544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.129572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.129745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.129868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.129892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.130061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.130165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.130189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.130303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.130405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.130430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.130531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.130657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.130682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.130817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.130946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.130970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 
00:42:47.416 [2024-05-15 09:08:42.131137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.131290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.131316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.131444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.131597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.131622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.131773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.131911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.131939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.132103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.132265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.132290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.132444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.132582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.132606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.132739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.132921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.132949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.133086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.133230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.133273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 
00:42:47.416 [2024-05-15 09:08:42.133403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.133557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.133581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.133689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.133796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.133820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.133975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.134127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.134153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.134299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.134396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.134420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.134523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.134644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.134671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.134795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.134890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.134914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.135016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.135162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.135186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 
00:42:47.416 [2024-05-15 09:08:42.135339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.135490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.135514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.135665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.135812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.135839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.135991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.136094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.136118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.136284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.136412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.136436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.416 [2024-05-15 09:08:42.136585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.136724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.416 [2024-05-15 09:08:42.136751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.416 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.136922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.137049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.137073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.137205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.137327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.137353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 
00:42:47.417 [2024-05-15 09:08:42.137509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.137673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.137700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.137873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.137975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.138004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.138156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.138292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.138319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.138431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.138606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.138634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.138757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.138893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.138917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.139088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.139225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.139267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.139394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.139520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.139544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 
00:42:47.417 [2024-05-15 09:08:42.139677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.139808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.139833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.139955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.140138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.140162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.140295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.140421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.140460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.140566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.140670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.140693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.140823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.140977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.141009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.141129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.141285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.141311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.141467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.141592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.141616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 
00:42:47.417 [2024-05-15 09:08:42.141766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.141898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.141924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.142030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.142213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.142267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.142387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.142516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.142540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.142694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.142841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.142866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.142975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.143127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.143151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.143283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.143442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.143484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.143641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.143778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.143805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 
00:42:47.417 [2024-05-15 09:08:42.143963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.144081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.144105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.144338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.144487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.144531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.144725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.144833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.144858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.144982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.145125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.145153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.145289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.145413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.145437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.145574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.145754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.145779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.145889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.146074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.146098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 
00:42:47.417 [2024-05-15 09:08:42.146222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.146325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.146349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.146518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.146632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.146659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.146800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.146907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.146934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.147108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.147286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.147313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.147432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.147632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.147662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.417 qpair failed and we were unable to recover it. 00:42:47.417 [2024-05-15 09:08:42.147796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.417 [2024-05-15 09:08:42.147944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.418 [2024-05-15 09:08:42.147971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.418 qpair failed and we were unable to recover it. 00:42:47.418 [2024-05-15 09:08:42.148084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.418 [2024-05-15 09:08:42.148213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.418 [2024-05-15 09:08:42.148243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.418 qpair failed and we were unable to recover it. 
00:42:47.418 [2024-05-15 09:08:42.148374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.418 [2024-05-15 09:08:42.148527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.418 [2024-05-15 09:08:42.148553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.418 qpair failed and we were unable to recover it.
[The three-line pattern above repeats without variation from 09:08:42.148 through 09:08:42.195 (log timestamps 00:42:47.418 to 00:42:47.692): every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x16a1570 each time, and each qpair fails unrecoverably.]
00:42:47.692 [2024-05-15 09:08:42.195929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.196043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.196069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.692 qpair failed and we were unable to recover it. 00:42:47.692 [2024-05-15 09:08:42.196290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.196396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.196422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.692 qpair failed and we were unable to recover it. 00:42:47.692 [2024-05-15 09:08:42.196598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.196753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.196777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.692 qpair failed and we were unable to recover it. 00:42:47.692 [2024-05-15 09:08:42.196903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.197025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.197052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.692 qpair failed and we were unable to recover it. 00:42:47.692 [2024-05-15 09:08:42.197226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.197363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.197404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.692 qpair failed and we were unable to recover it. 00:42:47.692 [2024-05-15 09:08:42.197542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.197648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.197675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.692 qpair failed and we were unable to recover it. 00:42:47.692 [2024-05-15 09:08:42.197819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.197979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.198004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.692 qpair failed and we were unable to recover it. 
00:42:47.692 [2024-05-15 09:08:42.198113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.198240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.198265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.692 qpair failed and we were unable to recover it. 00:42:47.692 [2024-05-15 09:08:42.198369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.198505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.198529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.692 qpair failed and we were unable to recover it. 00:42:47.692 [2024-05-15 09:08:42.198651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.198763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.198789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.692 qpair failed and we were unable to recover it. 00:42:47.692 [2024-05-15 09:08:42.198917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.199045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.199069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.692 qpair failed and we were unable to recover it. 00:42:47.692 [2024-05-15 09:08:42.199214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.199379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.199406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.692 qpair failed and we were unable to recover it. 00:42:47.692 [2024-05-15 09:08:42.199555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.199710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.692 [2024-05-15 09:08:42.199734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.692 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.199835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.199941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.199965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 
00:42:47.693 [2024-05-15 09:08:42.200074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.200264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.200290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.200445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.200596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.200623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.200799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.200931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.200955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.201136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.201270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.201295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.201405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.201530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.201554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.201657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.201784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.201810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.201905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.202059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.202084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 
00:42:47.693 [2024-05-15 09:08:42.202188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.202329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.202354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.202479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.202712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.202740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.202876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.203009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.203035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.203139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.203270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.203294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.203461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.203582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.203623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.203781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.203915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.203941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.204110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.204278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.204335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 
00:42:47.693 [2024-05-15 09:08:42.204467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.204565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.204588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.204740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.204920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.204944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.205096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.205249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.205278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.205417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.205548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.205573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.205701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.205833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.205862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.206017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.206204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.206236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.206370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.206469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.206493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 
00:42:47.693 [2024-05-15 09:08:42.206589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.206714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.206742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.206850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.206963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.206994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.207150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.207309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.207335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.207469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.207605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.207632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.207781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.207893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.207921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.208066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.208170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.208195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.693 qpair failed and we were unable to recover it. 00:42:47.693 [2024-05-15 09:08:42.208385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.693 [2024-05-15 09:08:42.208541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.208565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 
00:42:47.694 [2024-05-15 09:08:42.208758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.208910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.208935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.209107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.209233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.209259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.209388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.209532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.209558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.209723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.209897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.209924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.210077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.210231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.210257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.210382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.210486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.210515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.210654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.210767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.210795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 
00:42:47.694 [2024-05-15 09:08:42.210968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.211095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.211122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.211251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.211370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.211393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.211574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.211717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.211743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.211871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.212024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.212066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.212203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.212356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.212387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.212545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.212676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.212708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.212852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.212979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.213003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 
00:42:47.694 [2024-05-15 09:08:42.213146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.213283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.213313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.213466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.213610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.213637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.213763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.213892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.213917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.214032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.214177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.214204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.214336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.214474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.214501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.214655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.214785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.214810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.214969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.215106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.215132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 
00:42:47.694 [2024-05-15 09:08:42.215269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.215393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.215420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.215547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.215673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.215697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.215849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.216034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.216061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.216204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.216320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.216348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.216479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.216628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.216653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.216825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.216945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.216969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 00:42:47.694 [2024-05-15 09:08:42.217089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.217241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.694 [2024-05-15 09:08:42.217268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.694 qpair failed and we were unable to recover it. 
00:42:47.694 [2024-05-15 09:08:42.217419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.217546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.217571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.217749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.217864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.217892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.218000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.218134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.218175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.218353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.218466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.218512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.218660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.218827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.218851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.219003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.219122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.219149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.219274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.219388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.219414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 
00:42:47.695 [2024-05-15 09:08:42.219540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.219693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.219721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.219863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.220003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.220030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.220171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.220359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.220385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.220483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.220638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.220665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.220832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.221004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.221028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.221179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.221301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.221326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.221462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.221574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.221601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 
00:42:47.695 [2024-05-15 09:08:42.221741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.221860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.221887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.222020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.222119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.222142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.222275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.222406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.222432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.222577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.222742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.222774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.222902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.223030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.223055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.223198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.223334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.223361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.223540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.223669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.223692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 
00:42:47.695 [2024-05-15 09:08:42.223814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.223940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.223964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.224140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.224259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.224287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.224433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.224572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.224601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.224789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.224921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.224963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.225126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.225234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.225266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.225422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.225550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.225575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 00:42:47.695 [2024-05-15 09:08:42.225669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.225786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.695 [2024-05-15 09:08:42.225812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.695 qpair failed and we were unable to recover it. 
00:42:47.695 [2024-05-15 09:08:42.225914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.696 [2024-05-15 09:08:42.226064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.696 [2024-05-15 09:08:42.226089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:47.696 qpair failed and we were unable to recover it.
00:42:47.696 [... the identical four-line cycle above repeats continuously from 09:08:42.226 through 09:08:42.274 (Jenkins time 00:42:47.696-00:42:47.701), always with the same tqpair=0x16a1570, addr=10.0.0.2, port=4420, errno = 111; duplicate cycles omitted ...]
00:42:47.701 [2024-05-15 09:08:42.274624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.701 [2024-05-15 09:08:42.274815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.701 [2024-05-15 09:08:42.274846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.701 qpair failed and we were unable to recover it. 00:42:47.701 [2024-05-15 09:08:42.274991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.701 [2024-05-15 09:08:42.275127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.701 [2024-05-15 09:08:42.275171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.701 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.275367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.275513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.275547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.275722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.275861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.275891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.276083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.276187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.276214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.276372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.276502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.276535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.276667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.276819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.276853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 
00:42:47.702 [2024-05-15 09:08:42.277024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.277130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.277160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.277335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.277465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.277502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.277690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.277831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.277863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.278010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.278179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.278210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.278362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.278485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.278515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.278684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.278821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.278852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.279015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.279119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.279149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 
00:42:47.702 [2024-05-15 09:08:42.279287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.279471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.279503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.279644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.279807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.279839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.279988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.280092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.280121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.280285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.280412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.280446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.280595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.280770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.280804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.280978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.281109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.281157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.281316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.281491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.281527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 
00:42:47.702 [2024-05-15 09:08:42.281693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.281880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.281977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.282160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.282315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.282361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.282474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.282599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.282629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.282778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.282955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.282983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.283208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.283349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.283378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.283549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.283691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.283724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.702 qpair failed and we were unable to recover it. 00:42:47.702 [2024-05-15 09:08:42.283877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.702 [2024-05-15 09:08:42.284043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.284076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 
00:42:47.703 [2024-05-15 09:08:42.284246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.284391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.284440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.284584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.284718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.284750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.284907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.285074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.285105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.285298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.285410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.285440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.285593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.285746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.285778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.286003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.286151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.286184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.286339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.286452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.286483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 
00:42:47.703 [2024-05-15 09:08:42.286645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.286750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.286780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.286965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.287115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.287148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.287385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.287517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.287547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.287737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.287875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.287906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.288050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.288197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.288240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.288408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.288515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.288545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.288730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.288898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.288931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 
00:42:47.703 [2024-05-15 09:08:42.289071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.289185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.289225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.289350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.289510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.289558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.289710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.289848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.289877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.289993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.290152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.290185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.290328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.290461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.290492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.290603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.290752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.290786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.290908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.291038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.291077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 
00:42:47.703 [2024-05-15 09:08:42.291204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.291331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.291366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.291481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.291621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.291650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.291766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.291920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.291954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.292102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.292258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.292289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.292467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.292623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.292652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.703 qpair failed and we were unable to recover it. 00:42:47.703 [2024-05-15 09:08:42.292783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.703 [2024-05-15 09:08:42.292926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.292958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.293118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.293262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.293293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 
00:42:47.704 [2024-05-15 09:08:42.293423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.293561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.293594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.293726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.293909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.293941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.294087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.294205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.294244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.294357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.294471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.294501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.294671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.294802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.294831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.294988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.295123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.295150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.295339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.295465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.295499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 
00:42:47.704 [2024-05-15 09:08:42.295625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.295815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.295846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.295959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.296098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.296125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.296235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.296361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.296390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.296525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.296675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.296710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.296877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.297009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.297037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.297221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.297364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.297395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.297504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.297669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.297700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 
00:42:47.704 [2024-05-15 09:08:42.297866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.298030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.298058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.298266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.298403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.298432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.298578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.298761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.298799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.298940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.299094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.299129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.299297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.299404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.299435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.299564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.299718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.299752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.299892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.300027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.300057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 
00:42:47.704 [2024-05-15 09:08:42.300250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.300368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.300399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.300533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.300678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.300712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.300875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.300998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.301030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.301238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.301351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.301382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.704 qpair failed and we were unable to recover it. 00:42:47.704 [2024-05-15 09:08:42.301527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.301629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.704 [2024-05-15 09:08:42.301660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.705 qpair failed and we were unable to recover it. 00:42:47.705 [2024-05-15 09:08:42.301772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.302006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.302034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.705 qpair failed and we were unable to recover it. 00:42:47.705 [2024-05-15 09:08:42.302211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.302355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.302382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.705 qpair failed and we were unable to recover it. 
00:42:47.705 [2024-05-15 09:08:42.302498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.302641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.302670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.705 qpair failed and we were unable to recover it. 00:42:47.705 [2024-05-15 09:08:42.302780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.302889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.302919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.705 qpair failed and we were unable to recover it. 00:42:47.705 [2024-05-15 09:08:42.303062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.303208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.303250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.705 qpair failed and we were unable to recover it. 00:42:47.705 [2024-05-15 09:08:42.303375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.303524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.303555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.705 qpair failed and we were unable to recover it. 00:42:47.705 [2024-05-15 09:08:42.303676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.303835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.303881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.705 qpair failed and we were unable to recover it. 00:42:47.705 [2024-05-15 09:08:42.304004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.304148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.304177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.705 qpair failed and we were unable to recover it. 00:42:47.705 [2024-05-15 09:08:42.304333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.304479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.304523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.705 qpair failed and we were unable to recover it. 
00:42:47.705 [2024-05-15 09:08:42.304688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.304836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.304866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.705 qpair failed and we were unable to recover it. 00:42:47.705 [2024-05-15 09:08:42.305005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.305187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.305257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.705 qpair failed and we were unable to recover it. 00:42:47.705 [2024-05-15 09:08:42.305385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.305553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.305587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.705 qpair failed and we were unable to recover it. 00:42:47.705 [2024-05-15 09:08:42.305745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.305916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.305960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.705 qpair failed and we were unable to recover it. 00:42:47.705 [2024-05-15 09:08:42.306118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.306271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.306308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.705 qpair failed and we were unable to recover it. 00:42:47.705 [2024-05-15 09:08:42.306471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.306604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.306637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.705 qpair failed and we were unable to recover it. 00:42:47.705 [2024-05-15 09:08:42.306775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.306914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.306945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.705 qpair failed and we were unable to recover it. 
00:42:47.705 [2024-05-15 09:08:42.307066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.307205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.705 [2024-05-15 09:08:42.307245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:47.705 qpair failed and we were unable to recover it. 
00:42:47.705-00:42:47.709 [2024-05-15 09:08:42.307379 through 09:08:42.337379] (the same four-line failure sequence -- two posix.c:1037 connect() failures with errno = 111, one nvme_tcp.c:2374 sock connection error, then "qpair failed and we were unable to recover it." -- repeats continuously for every reconnect attempt against tqpair=0x16a1570, addr=10.0.0.2, port=4420) 
00:42:47.709 [2024-05-15 09:08:42.337560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.709 [2024-05-15 09:08:42.337731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.709 [2024-05-15 09:08:42.337761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.709 qpair failed and we were unable to recover it. 
00:42:47.709-00:42:47.711 [2024-05-15 09:08:42.337871 through 09:08:42.355491] (the same four-line failure sequence repeats continuously for every reconnect attempt against tqpair=0x7f9f40000b90, addr=10.0.0.2, port=4420, each attempt ending with "qpair failed and we were unable to recover it.") 
00:42:47.711 [2024-05-15 09:08:42.355613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.355781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.355813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.711 qpair failed and we were unable to recover it. 00:42:47.711 [2024-05-15 09:08:42.356022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.356128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.356154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.711 qpair failed and we were unable to recover it. 00:42:47.711 [2024-05-15 09:08:42.356288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.356392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.356418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.711 qpair failed and we were unable to recover it. 00:42:47.711 [2024-05-15 09:08:42.356642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.356781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.356810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.711 qpair failed and we were unable to recover it. 00:42:47.711 [2024-05-15 09:08:42.356986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.357120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.357163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.711 qpair failed and we were unable to recover it. 00:42:47.711 [2024-05-15 09:08:42.357330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.357431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.357457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.711 qpair failed and we were unable to recover it. 00:42:47.711 [2024-05-15 09:08:42.357638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.357779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.357808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.711 qpair failed and we were unable to recover it. 
00:42:47.711 [2024-05-15 09:08:42.357960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.358089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.358116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.711 qpair failed and we were unable to recover it. 00:42:47.711 [2024-05-15 09:08:42.358303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.358412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.358439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.711 qpair failed and we were unable to recover it. 00:42:47.711 [2024-05-15 09:08:42.358541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.358688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.358718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.711 qpair failed and we were unable to recover it. 00:42:47.711 [2024-05-15 09:08:42.358842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.358981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.359012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.711 qpair failed and we were unable to recover it. 00:42:47.711 [2024-05-15 09:08:42.359149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.359252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.359279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.711 qpair failed and we were unable to recover it. 00:42:47.711 [2024-05-15 09:08:42.359404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.359615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.359672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.711 qpair failed and we were unable to recover it. 00:42:47.711 [2024-05-15 09:08:42.359892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.360050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.360080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.711 qpair failed and we were unable to recover it. 
00:42:47.711 [2024-05-15 09:08:42.360194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.360347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.360375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.711 qpair failed and we were unable to recover it. 00:42:47.711 [2024-05-15 09:08:42.360552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.360701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.360731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.711 qpair failed and we were unable to recover it. 00:42:47.711 [2024-05-15 09:08:42.360883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.711 [2024-05-15 09:08:42.361038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.361081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.361248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.361383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.361409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.361581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.361713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.361742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.361873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.361977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.362003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.362111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.362260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.362289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 
00:42:47.712 [2024-05-15 09:08:42.362438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.362605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.362635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.362780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.362887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.362914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.363091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.363265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.363294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.363406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.363517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.363547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.363704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.363854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.363880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.364067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.364190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.364223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.364356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.364558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.364619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 
00:42:47.712 [2024-05-15 09:08:42.364791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.364915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.364941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.365052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.365150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.365177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.365401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.365589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.365644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.365806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.365936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.365980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.366099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.366221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.366267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.366475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.366657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.366686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.366839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.366997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.367024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 
00:42:47.712 [2024-05-15 09:08:42.367172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.367399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.367429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.367573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.367721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.367751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.367927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.368054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.368096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.368280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.368393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.368421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.368600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.368773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.368802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.368955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.369083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.369109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.369281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.369403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.369429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 
00:42:47.712 [2024-05-15 09:08:42.369577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.369732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.369759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.369901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.370053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.370079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.712 [2024-05-15 09:08:42.370179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.370307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.712 [2024-05-15 09:08:42.370337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.712 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.370485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.370653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.370682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.370831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.370952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.370978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.371099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.371205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.371239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.371376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.371504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.371534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 
00:42:47.713 [2024-05-15 09:08:42.371709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.371811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.371838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.371981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.372150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.372179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.372373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.372486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.372513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.372720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.372887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.372917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.373056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.373247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.373278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.373430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.373561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.373589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.373721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.373854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.373880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 
00:42:47.713 [2024-05-15 09:08:42.374087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.374228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.374272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.374411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.374573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.374602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.374749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.374892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.374918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.375089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.375221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.375249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.375351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.375475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.375505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.375685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.375791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.375820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.375929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.376109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.376138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 
00:42:47.713 [2024-05-15 09:08:42.376277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.376456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.376483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.376641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.376770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.376814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.376963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.377078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.377106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.377258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.377373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.377404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.377548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.377676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.377703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.377851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.378000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.378030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.378169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.378334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.378361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 
00:42:47.713 [2024-05-15 09:08:42.378484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.378637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.378679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.378825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.378987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.379014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.379149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.379353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.379381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.713 qpair failed and we were unable to recover it. 00:42:47.713 [2024-05-15 09:08:42.379533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.713 [2024-05-15 09:08:42.379670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.379719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 00:42:47.714 [2024-05-15 09:08:42.379872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.380013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.380042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 00:42:47.714 [2024-05-15 09:08:42.380183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.380375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.380401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 00:42:47.714 [2024-05-15 09:08:42.380532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.380696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.380739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 
00:42:47.714 [2024-05-15 09:08:42.380852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.381003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.381032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 00:42:47.714 [2024-05-15 09:08:42.381197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.381347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.381377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 00:42:47.714 [2024-05-15 09:08:42.381551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.381677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.381704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 00:42:47.714 [2024-05-15 09:08:42.381887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.382031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.382061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 00:42:47.714 [2024-05-15 09:08:42.382178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.382306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.382338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 00:42:47.714 [2024-05-15 09:08:42.382493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.382619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.382646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 00:42:47.714 [2024-05-15 09:08:42.382753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.382869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.382899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 
00:42:47.714 [2024-05-15 09:08:42.383017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.383160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.383191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 00:42:47.714 [2024-05-15 09:08:42.383319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.383456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.383482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 00:42:47.714 [2024-05-15 09:08:42.383637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.383818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.383848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 00:42:47.714 [2024-05-15 09:08:42.383990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.384152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.384181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 00:42:47.714 [2024-05-15 09:08:42.384334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.384487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.384530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 00:42:47.714 [2024-05-15 09:08:42.384653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.384868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.384897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 00:42:47.714 [2024-05-15 09:08:42.385065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.385259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.385286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 
00:42:47.714 [2024-05-15 09:08:42.385428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.385557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.385584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 00:42:47.714 [2024-05-15 09:08:42.385723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.385829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.385857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 00:42:47.714 [2024-05-15 09:08:42.386082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.386201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.386239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 00:42:47.714 [2024-05-15 09:08:42.386389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.386544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.386586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 00:42:47.714 [2024-05-15 09:08:42.386733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.386911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.386938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 00:42:47.714 [2024-05-15 09:08:42.387069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.387203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.387236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 00:42:47.714 [2024-05-15 09:08:42.387394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.387529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.387571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it. 
00:42:47.714 [2024-05-15 09:08:42.387717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.387858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.714 [2024-05-15 09:08:42.387887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.714 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats continuously from 09:08:42.387 through 09:08:42.437, always with errno = 111, tqpair=0x7f9f40000b90, addr=10.0.0.2, port=4420 ...]
00:42:47.720 [2024-05-15 09:08:42.436836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.436985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.437015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.720 qpair failed and we were unable to recover it.
00:42:47.720 [2024-05-15 09:08:42.437207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.437345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.437372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.720 qpair failed and we were unable to recover it. 00:42:47.720 [2024-05-15 09:08:42.437500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.437662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.437690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.720 qpair failed and we were unable to recover it. 00:42:47.720 [2024-05-15 09:08:42.437845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.437991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.438021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.720 qpair failed and we were unable to recover it. 00:42:47.720 [2024-05-15 09:08:42.438238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.438379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.438408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.720 qpair failed and we were unable to recover it. 00:42:47.720 [2024-05-15 09:08:42.438551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.438702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.438745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.720 qpair failed and we were unable to recover it. 00:42:47.720 [2024-05-15 09:08:42.438887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.439027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.439057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.720 qpair failed and we were unable to recover it. 00:42:47.720 [2024-05-15 09:08:42.439228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.439351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.439378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.720 qpair failed and we were unable to recover it. 
00:42:47.720 [2024-05-15 09:08:42.439473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.439604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.439631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.720 qpair failed and we were unable to recover it. 00:42:47.720 [2024-05-15 09:08:42.439780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.439915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.439945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.720 qpair failed and we were unable to recover it. 00:42:47.720 [2024-05-15 09:08:42.440066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.440210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.440267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.720 qpair failed and we were unable to recover it. 00:42:47.720 [2024-05-15 09:08:42.440369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.440505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.440532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.720 qpair failed and we were unable to recover it. 00:42:47.720 [2024-05-15 09:08:42.440691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.440827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.720 [2024-05-15 09:08:42.440853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.720 qpair failed and we were unable to recover it. 00:42:47.720 [2024-05-15 09:08:42.440991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.441088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.441114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.441270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.441371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.441398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 
00:42:47.721 [2024-05-15 09:08:42.441491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.441587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.441614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.441788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.441929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.441959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.442131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.442310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.442339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.442482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.442595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.442625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.442793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.442935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.442964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.443139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.443272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.443317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.443439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.443611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.443641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 
00:42:47.721 [2024-05-15 09:08:42.443766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.443869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.443897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.444054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.444151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.444177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.444323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.444475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.444505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.444649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.444817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.444845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.445015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.445117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.445143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.445277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.445434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.445461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.445630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.445764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.445793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 
00:42:47.721 [2024-05-15 09:08:42.445919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.446078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.446105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.446256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.446397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.446426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.446568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.446721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.446750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.446901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.447009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.447037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.447181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.447309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.447336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.447439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.447534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.447561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.447656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.447803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.447830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 
00:42:47.721 [2024-05-15 09:08:42.447987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.448102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.448132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.448267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.448379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.448409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.448586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.448695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.448722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.448844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.448994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.449023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.449170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.449375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.449403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.721 [2024-05-15 09:08:42.449561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.449675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.721 [2024-05-15 09:08:42.449701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.721 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.449825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.449927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.449955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 
00:42:47.722 [2024-05-15 09:08:42.450062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.450190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.450224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.450353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.450490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.450517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.450674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.450831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.450860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.450978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.451114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.451144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.451288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.451443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.451470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.451618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.451780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.451811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.451969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.452126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.452152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 
00:42:47.722 [2024-05-15 09:08:42.452310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.452418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.452444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.452550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.452698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.452733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.452886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.453029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.453059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.453241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.453366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.453393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.453581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.453768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.453795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.453963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.454108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.454138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.454293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.454420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.454463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 
00:42:47.722 [2024-05-15 09:08:42.454638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.454764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.454791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.454946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.455096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.455126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.455285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.455394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.455420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.455628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.455783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.455813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.455959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.456092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.456125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.456279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.456414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.456440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.456598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.456759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.456789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 
00:42:47.722 [2024-05-15 09:08:42.456958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.457126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.457156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.457286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.457423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.457449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.457603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.457769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.457798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.457942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.458089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.458119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.458293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.458418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.458462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.458603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.458745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.458774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.722 qpair failed and we were unable to recover it. 00:42:47.722 [2024-05-15 09:08:42.458935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.722 [2024-05-15 09:08:42.459060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.459087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 
00:42:47.723 [2024-05-15 09:08:42.459209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.459341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.459372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.459565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.459695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.459722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.459893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.460010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.460039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.460184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.460319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.460346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.460502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.460600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.460627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.460729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.460855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.460882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.461028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.461180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.461209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 
00:42:47.723 [2024-05-15 09:08:42.461368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.461578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.461605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.461780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.461885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.461928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.462033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.462163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.462190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.462317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.462455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.462490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.462632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.462775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.462805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.462923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.463024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.463051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.463233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.463375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.463405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 
00:42:47.723 [2024-05-15 09:08:42.463551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.463684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.463714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.463836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.464004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.464031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.464195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.464346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.464374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.464531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.464671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.464700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.464838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.464963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.464989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.465104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.465255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.465285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.465437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.465548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.465578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 
00:42:47.723 [2024-05-15 09:08:42.465732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.465864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.465891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.466019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.466145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.466175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.466337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.466479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.466505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.466661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.466832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.466862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.466974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.467116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.467145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.467290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.467459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.467488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 00:42:47.723 [2024-05-15 09:08:42.467609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.467748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:47.723 [2024-05-15 09:08:42.467775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:47.723 qpair failed and we were unable to recover it. 
00:42:47.723 [2024-05-15 09:08:42.467895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.724 [2024-05-15 09:08:42.468052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:47.724 [2024-05-15 09:08:42.468078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420
00:42:47.724 qpair failed and we were unable to recover it.
[... the identical cycle of connect() failed, errno = 111 / sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." repeats continuously from 09:08:42.468 through 09:08:42.497 (elapsed 00:42:48.004 to 00:42:48.007) ...]
00:42:48.007 [2024-05-15 09:08:42.497749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:48.007 [2024-05-15 09:08:42.497852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:48.007 [2024-05-15 09:08:42.497880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420
00:42:48.007 qpair failed and we were unable to recover it.
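For anyone triaging this stretch of the log: on Linux, errno = 111 is ECONNREFUSED, meaning the host's TCP connect() toward the NVMe/TCP listener at 10.0.0.2:4420 is being answered with a reset because nothing is accepting on that port while the target application is down. The sketch below is a hypothetical, minimal reproduction of that exact error path, not SPDK code; the address and port are taken from the log lines above, and everything else (the standalone program structure) is an assumption.

    /* Hypothetical sketch, not SPDK code: reproduce the errno 111 reported by
     * posix_sock_create above. With no listener on 10.0.0.2:4420, the kernel
     * answers the SYN with RST and connect() fails with ECONNREFUSED (111). */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* Expected while the target is down:
             * "connect: Connection refused (errno 111)" */
            printf("connect: %s (errno %d)\n", strerror(errno), errno);
        }
        close(fd);
        return 0;
    }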
00:42:48.007 [2024-05-15 09:08:42.498007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.007 [2024-05-15 09:08:42.498160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.007 [2024-05-15 09:08:42.498186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.007 qpair failed and we were unable to recover it. 00:42:48.007 [2024-05-15 09:08:42.498355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.007 [2024-05-15 09:08:42.498502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.007 [2024-05-15 09:08:42.498528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.007 qpair failed and we were unable to recover it. 00:42:48.007 [2024-05-15 09:08:42.498628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.007 [2024-05-15 09:08:42.498777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.007 [2024-05-15 09:08:42.498806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.007 qpair failed and we were unable to recover it. 00:42:48.007 [2024-05-15 09:08:42.498958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.007 [2024-05-15 09:08:42.499066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.007 [2024-05-15 09:08:42.499094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.007 qpair failed and we were unable to recover it. 00:42:48.007 [2024-05-15 09:08:42.499280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2441344 Killed "${NVMF_APP[@]}" "$@" 00:42:48.007 [2024-05-15 09:08:42.499386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.007 [2024-05-15 09:08:42.499413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.007 qpair failed and we were unable to recover it. 00:42:48.007 [2024-05-15 09:08:42.499554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.007 [2024-05-15 09:08:42.499700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.007 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:42:48.007 [2024-05-15 09:08:42.499730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.007 qpair failed and we were unable to recover it. 
00:42:48.007 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:42:48.007 [2024-05-15 09:08:42.499901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.007 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:42:48.007 [2024-05-15 09:08:42.500052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.500095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.008 qpair failed and we were unable to recover it. 00:42:48.008 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:42:48.008 [2024-05-15 09:08:42.500266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:48.008 [2024-05-15 09:08:42.500420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.500450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.008 qpair failed and we were unable to recover it. 00:42:48.008 [2024-05-15 09:08:42.500587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.500734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.500763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.008 qpair failed and we were unable to recover it. 00:42:48.008 [2024-05-15 09:08:42.500924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.501065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.501090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.008 qpair failed and we were unable to recover it. 00:42:48.008 [2024-05-15 09:08:42.501249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.501391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.501420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.008 qpair failed and we were unable to recover it. 00:42:48.008 [2024-05-15 09:08:42.501567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.501685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.501714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.008 qpair failed and we were unable to recover it. 
00:42:48.008 [2024-05-15 09:08:42.501880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.502006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.502032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.008 qpair failed and we were unable to recover it. 00:42:48.008 [2024-05-15 09:08:42.502207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.502319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.502349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.008 qpair failed and we were unable to recover it. 00:42:48.008 [2024-05-15 09:08:42.502459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.502562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.502589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.008 qpair failed and we were unable to recover it. 00:42:48.008 [2024-05-15 09:08:42.502728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.502833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.502860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.008 qpair failed and we were unable to recover it. 00:42:48.008 [2024-05-15 09:08:42.502992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.503135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.503164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.008 qpair failed and we were unable to recover it. 00:42:48.008 [2024-05-15 09:08:42.503295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.503392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.503418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.008 qpair failed and we were unable to recover it. 00:42:48.008 [2024-05-15 09:08:42.503534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.503657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.503683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.008 qpair failed and we were unable to recover it. 
00:42:48.008 [2024-05-15 09:08:42.503812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.503958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.503984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.008 qpair failed and we were unable to recover it. 00:42:48.008 [2024-05-15 09:08:42.504116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.504253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.504280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.008 qpair failed and we were unable to recover it. 00:42:48.008 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2441890 00:42:48.008 [2024-05-15 09:08:42.504392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:42:48.008 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2441890 00:42:48.008 [2024-05-15 09:08:42.504531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.504557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.008 qpair failed and we were unable to recover it. 00:42:48.008 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # '[' -z 2441890 ']' 00:42:48.008 [2024-05-15 09:08:42.504682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:48.008 [2024-05-15 09:08:42.504816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.504844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.008 qpair failed and we were unable to recover it. 00:42:48.008 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:42:48.008 [2024-05-15 09:08:42.504997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:48.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:42:48.008 [2024-05-15 09:08:42.505130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.505157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.008 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:42:48.008 qpair failed and we were unable to recover it. 00:42:48.008 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:48.008 [2024-05-15 09:08:42.505321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.505420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.505446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.008 qpair failed and we were unable to recover it. 00:42:48.008 [2024-05-15 09:08:42.505568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.505673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.505698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.008 qpair failed and we were unable to recover it. 00:42:48.008 [2024-05-15 09:08:42.505804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.008 [2024-05-15 09:08:42.508040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.009 [2024-05-15 09:08:42.508078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.009 qpair failed and we were unable to recover it. 00:42:48.009 [2024-05-15 09:08:42.508252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.009 [2024-05-15 09:08:42.508383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.009 [2024-05-15 09:08:42.508408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.009 qpair failed and we were unable to recover it. 00:42:48.009 [2024-05-15 09:08:42.508541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.009 [2024-05-15 09:08:42.508668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.009 [2024-05-15 09:08:42.508696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.009 qpair failed and we were unable to recover it. 00:42:48.009 [2024-05-15 09:08:42.508824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.009 [2024-05-15 09:08:42.508982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.009 [2024-05-15 09:08:42.509012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.009 qpair failed and we were unable to recover it. 
00:42:48.013 [2024-05-15 09:08:42.542361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:42:48.013 [2024-05-15 09:08:42.542525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:42:48.013 [2024-05-15 09:08:42.542564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 
00:42:48.013 qpair failed and we were unable to recover it. 
[the same failure sequence then repeats for tqpair=0x16a1570 from 09:08:42.542713 onward]
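Note the tqpair address changes here, from 0x7f9f40000b90 to 0x16a1570. A plausible reading (an assumption, not something the log proves) is that the host side tears the failed qpair object down and allocates a fresh one for the next attempt, so the pointer printed in the error message changes between runs. A toy sketch of that allocate/connect/tear-down retry pattern:

```c
/* Toy retry loop (an assumption about the pattern, not SPDK's actual
 * nvme_tcp implementation): each failed attempt frees the old qpair
 * object and allocates a fresh one, so the pointer printed in the error
 * message can change between attempts, as it does in the log above. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct tqpair {
    int attempts;          /* stand-in state; a real qpair holds the socket */
};

static bool tqpair_connect(struct tqpair *q)
{
    q->attempts++;
    return false;          /* stand-in for connect() hitting ECONNREFUSED */
}

int main(void)
{
    for (int i = 0; i < 3; i++) {
        struct tqpair *q = calloc(1, sizeof(*q));  /* new object, new address */
        if (q == NULL) {
            return 1;
        }
        if (!tqpair_connect(q)) {
            printf("sock connection error of tqpair=%p\n", (void *)q);
            free(q);                               /* torn down before the retry */
            continue;
        }
        free(q);
        break;
    }
    return 0;
}
```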
00:42:48.013 [2024-05-15 09:08:42.547904] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:42:48.013 [2024-05-15 09:08:42.547974] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
[connect() failures and unrecovered qpair errors for tqpair=0x16a1570 continue interleaved with these startup messages through 09:08:42.553180]
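The two lines above are a second SPDK process (the nvmf target) starting up mid-test and echoing its DPDK EAL parameters. As a rough illustration of what those parameters mean, the same values could be fed to SPDK's environment layer directly; this is a hedged sketch against the public spdk/env.h API (spdk_env_opts_init()/spdk_env_init()), not the nvmf app's actual option plumbing, and flags such as --file-prefix and --proc-type are handled by the app framework rather than set here:

```c
/* Hedged sketch: map the logged EAL parameters onto spdk/env.h.
 * Values come from the log line above; anything else is an assumption.
 * Build and link against the SPDK env library. */
#include <stdio.h>
#include "spdk/env.h"

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "nvmf";                      /* process name shown in the EAL line */
    opts.core_mask = "0xF0";                 /* -c 0xF0: run on cores 4-7          */
    opts.base_virtaddr = 0x200000000000;     /* --base-virtaddr from the log       */

    if (spdk_env_init(&opts) < 0) {          /* initializes DPDK EAL under the hood */
        fprintf(stderr, "Unable to initialize SPDK env\n");
        return 1;
    }

    printf("SPDK env initialized\n");
    return 0;
}
```

The -c 0xF0 coremask keeps the target off cores 0-3, which the initiator side of the test is presumably using; --proc-type=auto and --file-prefix=spdk0 let the two SPDK processes share the machine without colliding on hugepage files.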
00:42:48.014 [... the same connect() errno = 111 / tqpair=0x16a1570 addr=10.0.0.2 port=4420 failure pattern repeats uninterrupted, 09:08:42.548654 through 09:08:42.589412, with the wall-clock prefix advancing from 00:42:48.014 to 00:42:48.026 ...]
00:42:48.026 [2024-05-15 09:08:42.589552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.589660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.589690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.589829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.589964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.589990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.590133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.590253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 EAL: No free 2048 kB hugepages reported on node 1 00:42:48.026 [2024-05-15 09:08:42.590280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.590391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.590539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.590565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.590706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.590870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.590897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.591005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.591130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.591154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.591309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.591407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.591431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 
00:42:48.026 [2024-05-15 09:08:42.591556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.591657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.591680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.591787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.591918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.591942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.592072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.592203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.592236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.592342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.592505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.592529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.592683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.592813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.592837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.592969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.593069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.593094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.593225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.593352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.593377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 
00:42:48.026 [2024-05-15 09:08:42.593485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.593585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.593609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.593717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.593840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.593864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.593990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.594096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.594120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.594253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.594364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.594390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.594521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.594676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.594700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.594831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.594955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.594979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.595136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.595254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.595281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 
00:42:48.026 [2024-05-15 09:08:42.595391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.595513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.595537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.595642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.595764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.595790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.595902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.596028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.596053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.596184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.596340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.596366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.596498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.596622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.596647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.026 [2024-05-15 09:08:42.596783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.596888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.026 [2024-05-15 09:08:42.596913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.026 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.597044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.597150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.597179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 
00:42:48.027 [2024-05-15 09:08:42.597340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.597468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.597492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.597600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.597695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.597719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.597847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.597976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.597999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.598105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.598242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.598266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.598373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.598503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.598528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.598658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.598763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.598789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.598940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.599045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.599068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 
00:42:48.027 [2024-05-15 09:08:42.599195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.599325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.599350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.599453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.599564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.599588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.599692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.599792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.599816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.599922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.600032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.600055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.600157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.600291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.600316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.600417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.600520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.600544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.600685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.600811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.600836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 
00:42:48.027 [2024-05-15 09:08:42.600952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.601061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.601086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.601220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.601323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.601348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.601456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.601611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.601636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.601734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.601839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.601864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.601963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.602057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.602081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.602190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.602323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.602349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.602455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.602567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.602592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 
00:42:48.027 [2024-05-15 09:08:42.602726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.602829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.602853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.602979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.603130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.603155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.603297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.603400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.603425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.603584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.603707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.603731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.603866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.604007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.604031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.604135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.604276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.604301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.027 qpair failed and we were unable to recover it. 00:42:48.027 [2024-05-15 09:08:42.604396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.604523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.027 [2024-05-15 09:08:42.604547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 
00:42:48.028 [2024-05-15 09:08:42.604672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.604798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.604822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.604949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.605050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.605074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.605186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.605317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.605342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.605499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.605594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.605618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.605726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.605856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.605880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.606006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.606137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.606163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.606276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.606389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.606414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 
00:42:48.028 [2024-05-15 09:08:42.606521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.606618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.606642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.606733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.606866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.606890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.607053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.607182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.607206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.607355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.607477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.607502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.607632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.607755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.607780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.607887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.607993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.608018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.608144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.608260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.608286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 
00:42:48.028 [2024-05-15 09:08:42.608410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.608510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.608534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.608642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.608769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.608796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.608935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.609058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.609083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.609210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.609345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.609371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.609477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.609608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.609633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.609763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.609896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.609920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.610035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.610186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.610211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 
00:42:48.028 [2024-05-15 09:08:42.610353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.610506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.610530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.610641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.610771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.610800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.610908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.611008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.611032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.611168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.611331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.611357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.611488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.611607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.611632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.611766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.611887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.611911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 00:42:48.028 [2024-05-15 09:08:42.612022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.612150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.612174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.028 qpair failed and we were unable to recover it. 
00:42:48.028 [2024-05-15 09:08:42.612295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.028 [2024-05-15 09:08:42.612397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.612421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.029 qpair failed and we were unable to recover it. 00:42:48.029 [2024-05-15 09:08:42.612549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.612659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.612683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.029 qpair failed and we were unable to recover it. 00:42:48.029 [2024-05-15 09:08:42.612790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.612949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.612974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.029 qpair failed and we were unable to recover it. 00:42:48.029 [2024-05-15 09:08:42.613079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.613231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.613257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.029 qpair failed and we were unable to recover it. 00:42:48.029 [2024-05-15 09:08:42.613362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.613488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.613530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.029 qpair failed and we were unable to recover it. 00:42:48.029 [2024-05-15 09:08:42.613644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.613769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.613794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.029 qpair failed and we were unable to recover it. 00:42:48.029 [2024-05-15 09:08:42.613903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.614031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.614055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.029 qpair failed and we were unable to recover it. 
00:42:48.029 [2024-05-15 09:08:42.614151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.614265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.614292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.029 qpair failed and we were unable to recover it. 00:42:48.029 [2024-05-15 09:08:42.614406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.614567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.614591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.029 qpair failed and we were unable to recover it. 00:42:48.029 [2024-05-15 09:08:42.614696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.614801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.614827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.029 qpair failed and we were unable to recover it. 00:42:48.029 [2024-05-15 09:08:42.614922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.615017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.615041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.029 qpair failed and we were unable to recover it. 00:42:48.029 [2024-05-15 09:08:42.615182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.615316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.615341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.029 qpair failed and we were unable to recover it. 00:42:48.029 [2024-05-15 09:08:42.615464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.615596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.615620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.029 qpair failed and we were unable to recover it. 00:42:48.029 [2024-05-15 09:08:42.615749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.615844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.615869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.029 qpair failed and we were unable to recover it. 
00:42:48.029 [2024-05-15 09:08:42.616004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.616126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.616151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.029 qpair failed and we were unable to recover it. 00:42:48.029 [2024-05-15 09:08:42.616279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.616386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.616411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.029 qpair failed and we were unable to recover it. 00:42:48.029 [2024-05-15 09:08:42.616561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.616697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.616721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.029 qpair failed and we were unable to recover it. 00:42:48.029 [2024-05-15 09:08:42.616829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.616984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.617008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.029 qpair failed and we were unable to recover it. 00:42:48.029 [2024-05-15 09:08:42.617120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.617254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.617279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.029 qpair failed and we were unable to recover it. 00:42:48.029 [2024-05-15 09:08:42.617409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.617517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.617542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.029 qpair failed and we were unable to recover it. 00:42:48.029 [2024-05-15 09:08:42.617699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.617799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.029 [2024-05-15 09:08:42.617824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.029 qpair failed and we were unable to recover it. 
00:42:48.029 [2024-05-15 09:08:42.617978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:48.029 [2024-05-15 09:08:42.618078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:48.029 [2024-05-15 09:08:42.618102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:48.029 qpair failed and we were unable to recover it.
00:42:48.029 [... the four-record sequence above (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x16a1570 against 10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats with fresh timestamps from 09:08:42.618260 through 09:08:42.629902 ...]
00:42:48.031 [2024-05-15 09:08:42.629991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:42:48.031 [... the same connect()/qpair-failure sequence resumes at 09:08:42.630006 and repeats through 09:08:42.660530; every connection attempt to 10.0.0.2, port=4420 fails with errno = 111 ...]
00:42:48.035 [2024-05-15 09:08:42.660688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.660792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.660816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.035 qpair failed and we were unable to recover it. 00:42:48.035 [2024-05-15 09:08:42.660949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.661059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.661085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.035 qpair failed and we were unable to recover it. 00:42:48.035 [2024-05-15 09:08:42.661205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.661366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.661395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.035 qpair failed and we were unable to recover it. 00:42:48.035 [2024-05-15 09:08:42.661560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.661688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.661712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.035 qpair failed and we were unable to recover it. 00:42:48.035 [2024-05-15 09:08:42.661840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.661944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.661969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.035 qpair failed and we were unable to recover it. 00:42:48.035 [2024-05-15 09:08:42.662100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.662200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.662230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.035 qpair failed and we were unable to recover it. 00:42:48.035 [2024-05-15 09:08:42.662339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.662456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.662480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.035 qpair failed and we were unable to recover it. 
00:42:48.035 [2024-05-15 09:08:42.662585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.662713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.662738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.035 qpair failed and we were unable to recover it. 00:42:48.035 [2024-05-15 09:08:42.662868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.662991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.663015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.035 qpair failed and we were unable to recover it. 00:42:48.035 [2024-05-15 09:08:42.663148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.663257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.663282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.035 qpair failed and we were unable to recover it. 00:42:48.035 [2024-05-15 09:08:42.663411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.663565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.663590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.035 qpair failed and we were unable to recover it. 00:42:48.035 [2024-05-15 09:08:42.663755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.663853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.663878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.035 qpair failed and we were unable to recover it. 00:42:48.035 [2024-05-15 09:08:42.663976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.664132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.664157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.035 qpair failed and we were unable to recover it. 00:42:48.035 [2024-05-15 09:08:42.664299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.664456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.664480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.035 qpair failed and we were unable to recover it. 
00:42:48.035 [2024-05-15 09:08:42.664595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.664725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.664750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.035 qpair failed and we were unable to recover it. 00:42:48.035 [2024-05-15 09:08:42.664882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.665035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.665060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.035 qpair failed and we were unable to recover it. 00:42:48.035 [2024-05-15 09:08:42.665165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.665270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.665294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.035 qpair failed and we were unable to recover it. 00:42:48.035 [2024-05-15 09:08:42.665448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.665553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.665577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.035 qpair failed and we were unable to recover it. 00:42:48.035 [2024-05-15 09:08:42.665717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.035 [2024-05-15 09:08:42.665846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.665871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.666033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.666143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.666167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.666266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.666394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.666419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 
00:42:48.036 [2024-05-15 09:08:42.666550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.666646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.666670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.666778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.666933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.666958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.667069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.667176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.667202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.667330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.667439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.667464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.667622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.667771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.667796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.667929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.668031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.668055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.668208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.668340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.668367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 
00:42:48.036 [2024-05-15 09:08:42.668474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.668573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.668598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.668724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.668828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.668853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.668955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.669088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.669113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.669230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.669354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.669379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.669513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.669651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.669676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.669811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.669968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.669993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.670155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.670290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.670315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 
00:42:48.036 [2024-05-15 09:08:42.670446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.670567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.670591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.670720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.670866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.670892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.671022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.671143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.671168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.671266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.671371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.671396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.671499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.671595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.671620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.671721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.671877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.671903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.672029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.672141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.672165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 
00:42:48.036 [2024-05-15 09:08:42.672318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.672412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.672437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.672595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.672728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.672753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.672887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.673018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.673042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.673171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.673273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.673299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.673419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.673571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.036 [2024-05-15 09:08:42.673596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.036 qpair failed and we were unable to recover it. 00:42:48.036 [2024-05-15 09:08:42.673706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.673862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.673886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.674038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.674160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.674183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 
00:42:48.037 [2024-05-15 09:08:42.674293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.674402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.674427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.674580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.674732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.674756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.674868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.675023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.675048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.675158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.675272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.675297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.675430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.675538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.675567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.675695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.675802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.675827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.675988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.676088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.676113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 
00:42:48.037 [2024-05-15 09:08:42.676255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.676387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.676413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.676520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.676670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.676694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.676831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.676937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.676962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.677123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.677258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.677284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.677390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.677532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.677556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.677720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.677822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.677846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.677952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.678051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.678076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 
00:42:48.037 [2024-05-15 09:08:42.678180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.678325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.678350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.678488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.678617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.678643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.678771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.678894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.678919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.679052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.679177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.679202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.679310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.679438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.679462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.679596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.679726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.679752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.679853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.679959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.679982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 
00:42:48.037 [2024-05-15 09:08:42.680113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.680248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.680272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.680398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.680524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.680548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.680687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.680798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.680822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.680952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.681100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.681125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.681237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.681372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.681396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.681527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.681694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.037 [2024-05-15 09:08:42.681719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.037 qpair failed and we were unable to recover it. 00:42:48.037 [2024-05-15 09:08:42.681873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.682000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.682024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 
00:42:48.038 [2024-05-15 09:08:42.682130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.682231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.682256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.682359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.682497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.682521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.682682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.682785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.682809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.682964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.683073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.683099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.683234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.683390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.683415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.683546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.683670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.683695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.683837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.683968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.683993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 
00:42:48.038 [2024-05-15 09:08:42.684131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.684259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.684284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.684419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.684558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.684582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.684836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.684936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.684961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.685084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.685234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.685259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.685395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.685526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.685551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.685720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.685871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.685896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.686021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.686127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.686152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 
00:42:48.038 [2024-05-15 09:08:42.686282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.686384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.686409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.686531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.686637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.686662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.686770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.686877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.686902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.687059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.687176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.687202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.687316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.687443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.687468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.687592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.687718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.687743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.687849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.687946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.687970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 
00:42:48.038 [2024-05-15 09:08:42.688097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.688255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.688281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.688412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.688511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.688537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.688683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.688820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.688845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.688968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.689096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.689121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.689229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.689361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.689386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.689545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.689647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.689671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 00:42:48.038 [2024-05-15 09:08:42.689828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.689937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.038 [2024-05-15 09:08:42.689966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.038 qpair failed and we were unable to recover it. 
00:42:48.038 [2024-05-15 09:08:42.690092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.690190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.690222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.039 qpair failed and we were unable to recover it. 00:42:48.039 [2024-05-15 09:08:42.690359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.690487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.690512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.039 qpair failed and we were unable to recover it. 00:42:48.039 [2024-05-15 09:08:42.690645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.690743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.690769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.039 qpair failed and we were unable to recover it. 00:42:48.039 [2024-05-15 09:08:42.690868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.690994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.691019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.039 qpair failed and we were unable to recover it. 00:42:48.039 [2024-05-15 09:08:42.691148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.691258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.691285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.039 qpair failed and we were unable to recover it. 00:42:48.039 [2024-05-15 09:08:42.691391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.691520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.691546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.039 qpair failed and we were unable to recover it. 00:42:48.039 [2024-05-15 09:08:42.691674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.691826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.691850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.039 qpair failed and we were unable to recover it. 
00:42:48.039 [2024-05-15 09:08:42.691986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.692118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.692142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.039 qpair failed and we were unable to recover it. 00:42:48.039 [2024-05-15 09:08:42.692271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.692378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.692403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.039 qpair failed and we were unable to recover it. 00:42:48.039 [2024-05-15 09:08:42.692507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.692656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.692681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.039 qpair failed and we were unable to recover it. 00:42:48.039 [2024-05-15 09:08:42.692842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.692976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.693000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.039 qpair failed and we were unable to recover it. 00:42:48.039 [2024-05-15 09:08:42.693125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.693258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.693284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.039 qpair failed and we were unable to recover it. 00:42:48.039 [2024-05-15 09:08:42.693415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.693566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.693590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.039 qpair failed and we were unable to recover it. 00:42:48.039 [2024-05-15 09:08:42.693724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.693825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.693851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.039 qpair failed and we were unable to recover it. 
00:42:48.039 [2024-05-15 09:08:42.693956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.039 [2024-05-15 09:08:42.694076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.694101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.694254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.694364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.694388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.694520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.694648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.694674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.694803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.694910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.694936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.695068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.695226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.695251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.695381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.695510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.695535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.695667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.695823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.695847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 
00:42:48.040 [2024-05-15 09:08:42.695953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.696059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.696083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.696247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.696402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.696428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.696555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.696687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.696712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.696814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.696935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.696959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.697092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.697253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.697279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.697385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.697491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.697516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.697646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.697751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.697777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 
00:42:48.040 [2024-05-15 09:08:42.697876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.698002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.698028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.698158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.698311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.698336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.698505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.698628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.698657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.698884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.699017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.699043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.699149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.699279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.699307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.699443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.699574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.699601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.699741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.699877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.699903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 
00:42:48.040 [2024-05-15 09:08:42.700057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.700190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.700221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.700353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.700483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.700509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.700621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.700748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.700774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.700874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.701004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.701031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.701146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.701266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.701292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.701451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.701579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.701605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 00:42:48.040 [2024-05-15 09:08:42.701738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.701838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.040 [2024-05-15 09:08:42.701862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.040 qpair failed and we were unable to recover it. 
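Note the tqpair pointer alternating between 0x16a1570 and 0x7f9f40000b90 in the entries above, which suggests more than one qpair object (or a torn-down and reallocated one) cycling through the same connect-and-fail path; each "qpair failed and we were unable to recover it." line closes out one attempt. The control flow the log implies is roughly the bounded retry loop sketched below. This is illustrative only: try_connect() is a hypothetical stand-in, not SPDK's nvme_tcp_qpair_connect_sock.

/* Illustrative sketch of the retry shape implied by the log; the
 * try_connect() helper is hypothetical, standing in for the transport's
 * qpair connect path that keeps hitting ECONNREFUSED. */
#include <stdbool.h>
#include <stdio.h>

static bool try_connect(const char *addr, int port)
{
    /* A real implementation would create a socket and connect() to
     * addr:port, returning false when the connection is refused. */
    (void)addr;
    (void)port;
    return false;
}

int main(void)
{
    const int max_attempts = 8;   /* assumed budget for illustration */

    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        if (try_connect("10.0.0.2", 4420)) {
            printf("qpair connected on attempt %d\n", attempt);
            return 0;
        }
        printf("attempt %d: qpair failed and we were unable to recover it.\n",
               attempt);
    }
    return 1;
}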
00:42:48.040 [2024-05-15 09:08:42.701961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.702094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.702119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.702263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.702369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.702393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.702506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.702615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.702641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.702774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.702901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.702925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.703057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.703166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.703190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.703324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.703426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.703451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.703561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.703713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.703738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 
00:42:48.041 [2024-05-15 09:08:42.703838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.703936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.703961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.704090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.704212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.704241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.704339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.704443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.704466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.704603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.704734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.704759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.704871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.705000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.705026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.705162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.705265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.705291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.705429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.705538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.705563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 
00:42:48.041 [2024-05-15 09:08:42.705687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.705800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.705824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.705927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.706052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.706076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.706184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.706318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.706344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.706479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.706588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.706612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.706766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.706872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.706901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.707012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.707145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.707170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.707316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.707445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.707470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 
00:42:48.041 [2024-05-15 09:08:42.707570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.707661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.707685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.707822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.707949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.707973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.708100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.708231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.708256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.708388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.708501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.708525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.708633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.708731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.708755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.708855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.709006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.709031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.041 [2024-05-15 09:08:42.709158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.709260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.709286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 
00:42:48.041 [2024-05-15 09:08:42.709445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.709575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.041 [2024-05-15 09:08:42.709600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.041 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.709754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.709884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.709909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.710013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.710164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.710189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.710366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.710495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.710520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.710624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.710727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.710752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.710876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.711010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.711035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.711167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.711302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.711328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 
00:42:48.042 [2024-05-15 09:08:42.711437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.711567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.711591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.711725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.711849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.711874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.712030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.712137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.712162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.712293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.712424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.712449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.712563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.712692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.712717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.712848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.712973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.712998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.713100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.713232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.713258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 
00:42:48.042 [2024-05-15 09:08:42.713408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.713541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.713566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.713692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.713826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.713851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.713977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.714111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.714136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.714267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.714398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.714423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.714533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.714659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.714684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.714783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.714889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.714913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.715055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.715180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.715205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 
00:42:48.042 [2024-05-15 09:08:42.715340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.715467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.715492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.715625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.715728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.715753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.715879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.715991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.716015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.716127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.716256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.716281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.716435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.716587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.716612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.716741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.716843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.716868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.716994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.717121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.717146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 
00:42:48.042 [2024-05-15 09:08:42.717250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.717380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.717404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.042 qpair failed and we were unable to recover it. 00:42:48.042 [2024-05-15 09:08:42.717540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.717665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.042 [2024-05-15 09:08:42.717689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.043 qpair failed and we were unable to recover it. 00:42:48.043 [2024-05-15 09:08:42.717819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.717920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.717945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.043 qpair failed and we were unable to recover it. 00:42:48.043 [2024-05-15 09:08:42.718048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.718177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.718202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.043 qpair failed and we were unable to recover it. 00:42:48.043 [2024-05-15 09:08:42.718343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.718441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.718466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.043 qpair failed and we were unable to recover it. 00:42:48.043 [2024-05-15 09:08:42.718564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.718659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.718684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.043 qpair failed and we were unable to recover it. 00:42:48.043 [2024-05-15 09:08:42.718788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.718898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.718924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.043 qpair failed and we were unable to recover it. 
00:42:48.043 [2024-05-15 09:08:42.719053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.719154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.719180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.043 qpair failed and we were unable to recover it. 00:42:48.043 [2024-05-15 09:08:42.719290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.719395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.719419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.043 qpair failed and we were unable to recover it. 00:42:48.043 [2024-05-15 09:08:42.719518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.719618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.719643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.043 qpair failed and we were unable to recover it. 00:42:48.043 [2024-05-15 09:08:42.719699] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:48.043 [2024-05-15 09:08:42.719734] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:48.043 [2024-05-15 09:08:42.719749] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:48.043 [2024-05-15 09:08:42.719762] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:48.043 [2024-05-15 09:08:42.719774] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:48.043 [2024-05-15 09:08:42.719747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.719837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:42:48.043 [2024-05-15 09:08:42.719865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:42:48.043 [2024-05-15 09:08:42.719890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:42:48.043 [2024-05-15 09:08:42.719893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:42:48.043 [2024-05-15 09:08:42.719867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.719892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.043 qpair failed and we were unable to recover it. 00:42:48.043 [2024-05-15 09:08:42.720001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.720134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.720159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.043 qpair failed and we were unable to recover it. 
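The app_setup_trace NOTICE lines above (de-interleaved here, since concurrent writers had split one nvme_tcp entry mid-word and scattered the reactor notices) appear to be the nvmf application coming up with tracing enabled. As the log itself states: with Tracepoint Group Mask 0xFFFF, running the quoted command spdk_trace -s nvmf -i 0 on the same host captures a snapshot of events at runtime, spdk_trace with no parameters also works when it is the only SPDK application running, and the shared-memory file /dev/shm/nvmf_trace.0 can be copied off for offline analysis/debug. The four "Reactor started on core 4..7" notices are the SPDK event framework's reactors spinning up on their assigned cores while the connect-retry loop continues in parallel.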
00:42:48.043 [2024-05-15 09:08:42.720286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.720419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.720445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.043 qpair failed and we were unable to recover it. 00:42:48.043 [2024-05-15 09:08:42.720547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.720654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.720678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.043 qpair failed and we were unable to recover it. 00:42:48.043 [2024-05-15 09:08:42.720780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.720883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.720908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.043 qpair failed and we were unable to recover it. 00:42:48.043 [2024-05-15 09:08:42.721037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.721163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.721189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.043 qpair failed and we were unable to recover it. 00:42:48.043 [2024-05-15 09:08:42.721321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.721420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.721446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.043 qpair failed and we were unable to recover it. 00:42:48.043 [2024-05-15 09:08:42.721552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.721694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.721719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.043 qpair failed and we were unable to recover it. 00:42:48.043 [2024-05-15 09:08:42.721860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.721990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.043 [2024-05-15 09:08:42.722015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.043 qpair failed and we were unable to recover it. 
00:42:48.043 [2024-05-15 09:08:42.722110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:48.043 [2024-05-15 09:08:42.722243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:48.043 [2024-05-15 09:08:42.722269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:48.043 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair failure block repeats continuously (150+ occurrences) from 09:08:42.722110 through 09:08:42.762027, every attempt against tqpair=0x16a1570 at addr=10.0.0.2, port=4420, and every recovery failing ...]
00:42:48.049 [2024-05-15 09:08:42.762162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.762273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.762300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.049 qpair failed and we were unable to recover it. 00:42:48.049 [2024-05-15 09:08:42.762435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.762587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.762617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.049 qpair failed and we were unable to recover it. 00:42:48.049 [2024-05-15 09:08:42.762753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.762853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.762879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.049 qpair failed and we were unable to recover it. 00:42:48.049 [2024-05-15 09:08:42.763027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.763160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.763187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.049 qpair failed and we were unable to recover it. 00:42:48.049 [2024-05-15 09:08:42.763316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.763434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.763460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.049 qpair failed and we were unable to recover it. 00:42:48.049 [2024-05-15 09:08:42.763566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.763683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.763711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.049 qpair failed and we were unable to recover it. 00:42:48.049 [2024-05-15 09:08:42.763829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.763936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.763962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.049 qpair failed and we were unable to recover it. 
00:42:48.049 [2024-05-15 09:08:42.764106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.764239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.764272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.049 qpair failed and we were unable to recover it. 00:42:48.049 [2024-05-15 09:08:42.764379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.764483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.764510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.049 qpair failed and we were unable to recover it. 00:42:48.049 [2024-05-15 09:08:42.764666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.764764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.764794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.049 qpair failed and we were unable to recover it. 00:42:48.049 [2024-05-15 09:08:42.764931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.765069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.765095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.049 qpair failed and we were unable to recover it. 00:42:48.049 [2024-05-15 09:08:42.765262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.765378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.765407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.049 qpair failed and we were unable to recover it. 00:42:48.049 [2024-05-15 09:08:42.765557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.765665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.765693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.049 qpair failed and we were unable to recover it. 00:42:48.049 [2024-05-15 09:08:42.765811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.765918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.765947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.049 qpair failed and we were unable to recover it. 
00:42:48.049 [2024-05-15 09:08:42.766116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.766223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.766253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.049 qpair failed and we were unable to recover it. 00:42:48.049 [2024-05-15 09:08:42.766366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.766474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.766500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.049 qpair failed and we were unable to recover it. 00:42:48.049 [2024-05-15 09:08:42.766647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.766752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.766780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.049 qpair failed and we were unable to recover it. 00:42:48.049 [2024-05-15 09:08:42.766895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.766997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.767025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.049 qpair failed and we were unable to recover it. 00:42:48.049 [2024-05-15 09:08:42.767141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.767255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.767281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.049 qpair failed and we were unable to recover it. 00:42:48.049 [2024-05-15 09:08:42.767401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.767507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.049 [2024-05-15 09:08:42.767533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.767676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.767800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.767827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 
00:42:48.050 [2024-05-15 09:08:42.767936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.768098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.768132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.768265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.768381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.768414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.768523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.768673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.768702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.768817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.768931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.768959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.769080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.769222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.769248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.769396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.769503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.769529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.769638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.769745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.769771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 
00:42:48.050 [2024-05-15 09:08:42.769881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.770015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.770046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.770153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.770264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.770295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.770409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.770522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.770549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.770669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.770804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.770839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.770960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.771109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.771139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.771254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.771369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.771399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.771521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.771662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.771692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 
00:42:48.050 [2024-05-15 09:08:42.771815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.771922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.771948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.772088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.772231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.772257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.772375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.772498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.772528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.772644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.772785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.772817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.772929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.773033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.773059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.773177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.773294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.773323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.773440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.773544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.773573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 
00:42:48.050 [2024-05-15 09:08:42.773706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.773801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.773831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.773999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.774134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.774166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.774335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.774437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.774465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.774599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.774712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.050 [2024-05-15 09:08:42.774744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.050 qpair failed and we were unable to recover it. 00:42:48.050 [2024-05-15 09:08:42.774889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.774989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.775019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.323 qpair failed and we were unable to recover it. 00:42:48.323 [2024-05-15 09:08:42.775160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.775288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.775317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.323 qpair failed and we were unable to recover it. 00:42:48.323 [2024-05-15 09:08:42.775436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.775568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.775596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.323 qpair failed and we were unable to recover it. 
00:42:48.323 [2024-05-15 09:08:42.775710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.775842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.775870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.323 qpair failed and we were unable to recover it. 00:42:48.323 [2024-05-15 09:08:42.776000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.776122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.776149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.323 qpair failed and we were unable to recover it. 00:42:48.323 [2024-05-15 09:08:42.776267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.776385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.776419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.323 qpair failed and we were unable to recover it. 00:42:48.323 [2024-05-15 09:08:42.776537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.776648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.776677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.323 qpair failed and we were unable to recover it. 00:42:48.323 [2024-05-15 09:08:42.776805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.776926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.776955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.323 qpair failed and we were unable to recover it. 00:42:48.323 [2024-05-15 09:08:42.777101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.777206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.777238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.323 qpair failed and we were unable to recover it. 00:42:48.323 [2024-05-15 09:08:42.777379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.777509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.777535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.323 qpair failed and we were unable to recover it. 
00:42:48.323 [2024-05-15 09:08:42.777647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.777747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.777772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.323 qpair failed and we were unable to recover it. 00:42:48.323 [2024-05-15 09:08:42.777876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.778010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.778034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.323 qpair failed and we were unable to recover it. 00:42:48.323 [2024-05-15 09:08:42.778150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.778259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.778285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.323 qpair failed and we were unable to recover it. 00:42:48.323 [2024-05-15 09:08:42.778394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.778496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.778524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.323 qpair failed and we were unable to recover it. 00:42:48.323 [2024-05-15 09:08:42.778638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.778737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.778762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.323 qpair failed and we were unable to recover it. 00:42:48.323 [2024-05-15 09:08:42.778875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.323 [2024-05-15 09:08:42.779007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.779032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.779142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.779262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.779289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 
00:42:48.324 [2024-05-15 09:08:42.779389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.779500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.779526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.779624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.779771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.779796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.779896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.780032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.780058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.780168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.780319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.780347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.780474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.780576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.780601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.780705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.780808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.780832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.780929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.781025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.781049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 
00:42:48.324 [2024-05-15 09:08:42.781186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.781318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.781343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.781440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.781542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.781566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.781656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.781791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.781816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.781911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.782021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.782046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.782159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.782281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.782307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.782401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.782504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.782529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.782627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.782728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.782753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 
00:42:48.324 [2024-05-15 09:08:42.782860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.782966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.782990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.783092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.783231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.783257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.783361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.783471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.783495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.783599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.783694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.783718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.783815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.783912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.783936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.784058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.784162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.784191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.784323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.784446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.784471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 
00:42:48.324 [2024-05-15 09:08:42.784576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.784699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.784724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.784833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.784944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.784969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.785069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.785176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.785201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.785319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.785430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.785454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.785556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.785659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.785683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.324 qpair failed and we were unable to recover it. 00:42:48.324 [2024-05-15 09:08:42.785790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.324 [2024-05-15 09:08:42.785899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.325 [2024-05-15 09:08:42.785923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.325 qpair failed and we were unable to recover it. 00:42:48.325 [2024-05-15 09:08:42.786022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.325 [2024-05-15 09:08:42.786116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.325 [2024-05-15 09:08:42.786141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.325 qpair failed and we were unable to recover it. 
00:42:48.325 [2024-05-15 09:08:42.786247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.325 [2024-05-15 09:08:42.786348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.325 [2024-05-15 09:08:42.786372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.325 qpair failed and we were unable to recover it. 00:42:48.325 [2024-05-15 09:08:42.786485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.325 [2024-05-15 09:08:42.786589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.325 [2024-05-15 09:08:42.786613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.325 qpair failed and we were unable to recover it. 00:42:48.325 [2024-05-15 09:08:42.786718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.325 [2024-05-15 09:08:42.786823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.325 [2024-05-15 09:08:42.786848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.325 qpair failed and we were unable to recover it. 00:42:48.325 [2024-05-15 09:08:42.786963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.325 [2024-05-15 09:08:42.787092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.325 [2024-05-15 09:08:42.787117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.325 qpair failed and we were unable to recover it. 00:42:48.325 [2024-05-15 09:08:42.787227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.325 [2024-05-15 09:08:42.787322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.325 [2024-05-15 09:08:42.787347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.325 qpair failed and we were unable to recover it. 00:42:48.325 [2024-05-15 09:08:42.787448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.325 [2024-05-15 09:08:42.787546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.325 [2024-05-15 09:08:42.787571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.325 qpair failed and we were unable to recover it. 00:42:48.325 [2024-05-15 09:08:42.787674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.325 [2024-05-15 09:08:42.787779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.325 [2024-05-15 09:08:42.787805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.325 qpair failed and we were unable to recover it. 
00:42:48.325 [2024-05-15 09:08:42.787906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:48.325 [2024-05-15 09:08:42.788001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:48.325 [2024-05-15 09:08:42.788025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420
00:42:48.325 qpair failed and we were unable to recover it.
00:42:48.325 [the four-line failure pattern above repeats unchanged for every reconnect attempt from 09:08:42.788 through 09:08:42.826 (wall clock 00:42:48.325-00:42:48.330): two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x16a1570 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it."; repeated entries condensed]
00:42:48.330 [2024-05-15 09:08:42.826224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.330 [2024-05-15 09:08:42.826337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.330 [2024-05-15 09:08:42.826361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.330 qpair failed and we were unable to recover it. 00:42:48.330 [2024-05-15 09:08:42.826465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.330 [2024-05-15 09:08:42.826595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.330 [2024-05-15 09:08:42.826619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.330 qpair failed and we were unable to recover it. 00:42:48.330 [2024-05-15 09:08:42.826727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.330 [2024-05-15 09:08:42.826826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.330 [2024-05-15 09:08:42.826851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.330 qpair failed and we were unable to recover it. 00:42:48.330 [2024-05-15 09:08:42.826952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.330 [2024-05-15 09:08:42.827048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.330 [2024-05-15 09:08:42.827073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.330 qpair failed and we were unable to recover it. 00:42:48.330 [2024-05-15 09:08:42.827193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.827327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.827351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.331 qpair failed and we were unable to recover it. 00:42:48.331 [2024-05-15 09:08:42.827474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.827577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.827600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.331 qpair failed and we were unable to recover it. 00:42:48.331 [2024-05-15 09:08:42.827719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.827822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.827846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.331 qpair failed and we were unable to recover it. 
00:42:48.331 [2024-05-15 09:08:42.827952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.828059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.828083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.331 qpair failed and we were unable to recover it. 00:42:48.331 [2024-05-15 09:08:42.828196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.828312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.828336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.331 qpair failed and we were unable to recover it. 00:42:48.331 [2024-05-15 09:08:42.828438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.828562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.828586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.331 qpair failed and we were unable to recover it. 00:42:48.331 [2024-05-15 09:08:42.828725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.828831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.828855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.331 qpair failed and we were unable to recover it. 00:42:48.331 [2024-05-15 09:08:42.828959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.829066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.829094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.331 qpair failed and we were unable to recover it. 00:42:48.331 [2024-05-15 09:08:42.829198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.829301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.829326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.331 qpair failed and we were unable to recover it. 00:42:48.331 [2024-05-15 09:08:42.829438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.829535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.829560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.331 qpair failed and we were unable to recover it. 
00:42:48.331 [2024-05-15 09:08:42.829694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.829796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.829820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.331 qpair failed and we were unable to recover it. 00:42:48.331 [2024-05-15 09:08:42.829919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.830039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.830064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.331 qpair failed and we were unable to recover it. 00:42:48.331 [2024-05-15 09:08:42.830169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.830295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.830321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.331 qpair failed and we were unable to recover it. 00:42:48.331 [2024-05-15 09:08:42.830422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.830523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.830553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.331 qpair failed and we were unable to recover it. 00:42:48.331 [2024-05-15 09:08:42.830685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.830787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.830812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.331 qpair failed and we were unable to recover it. 00:42:48.331 [2024-05-15 09:08:42.830939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.831044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.831069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.331 qpair failed and we were unable to recover it. 00:42:48.331 [2024-05-15 09:08:42.831196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.831309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.331 [2024-05-15 09:08:42.831333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1570 with addr=10.0.0.2, port=4420 00:42:48.331 qpair failed and we were unable to recover it. 
00:42:48.331 [three further identical retries against tqpair=0x16a1570 between 09:08:42.831443 and 09:08:42.832112, each ending "qpair failed and we were unable to recover it."]
00:42:48.331 [2024-05-15 09:08:42.832265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:48.331 [2024-05-15 09:08:42.832397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:48.331 [2024-05-15 09:08:42.832428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420
00:42:48.331 qpair failed and we were unable to recover it.
00:42:48.331 [the retry loop now targets tqpair=0x7f9f40000b90; three more identical attempts follow through 09:08:42.833207]
00:42:48.331-00:42:48.335 [the same failure loop continues unchanged for tqpair=0x7f9f40000b90: repeated posix_sock_create connect() failures with errno = 111, each followed by an nvme_tcp_qpair_connect_sock error for addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.", from 09:08:42.833340 through 09:08:42.859754]
00:42:48.335 [2024-05-15 09:08:42.859985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.335 [2024-05-15 09:08:42.860122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.335 [2024-05-15 09:08:42.860151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.335 qpair failed and we were unable to recover it. 00:42:48.335 [2024-05-15 09:08:42.860281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.335 [2024-05-15 09:08:42.860409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.335 [2024-05-15 09:08:42.860439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.335 qpair failed and we were unable to recover it. 00:42:48.335 [2024-05-15 09:08:42.860559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.335 [2024-05-15 09:08:42.860680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.335 [2024-05-15 09:08:42.860710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.335 qpair failed and we were unable to recover it. 00:42:48.335 [2024-05-15 09:08:42.860823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.335 [2024-05-15 09:08:42.860932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.335 [2024-05-15 09:08:42.860958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.335 qpair failed and we were unable to recover it. 00:42:48.335 [2024-05-15 09:08:42.861111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.335 [2024-05-15 09:08:42.861211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.335 [2024-05-15 09:08:42.861245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.335 qpair failed and we were unable to recover it. 00:42:48.335 [2024-05-15 09:08:42.861351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.335 [2024-05-15 09:08:42.861483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.335 [2024-05-15 09:08:42.861508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.335 qpair failed and we were unable to recover it. 00:42:48.335 [2024-05-15 09:08:42.861611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.335 [2024-05-15 09:08:42.861708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.335 [2024-05-15 09:08:42.861733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.335 qpair failed and we were unable to recover it. 
00:42:48.335 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:42:48.335 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@861 -- # return 0
00:42:48.335 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:42:48.335 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable
00:42:48.335 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence continues interleaved with the xtrace lines above, from 09:08:42.861839 through 09:08:42.863299 ...]
00:42:48.335 [2024-05-15 09:08:42.863403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:48.335 [2024-05-15 09:08:42.863502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:48.335 [2024-05-15 09:08:42.863527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420
00:42:48.335 qpair failed and we were unable to recover it.
[... the same four-line failure sequence against tqpair=0x7f9f40000b90, addr=10.0.0.2, port=4420 repeats without variation from 09:08:42.863627 through 09:08:42.884315; only the timestamps differ ...]
00:42:48.339 [2024-05-15 09:08:42.884413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:48.339 [2024-05-15 09:08:42.884516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:48.339 [2024-05-15 09:08:42.884540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420
00:42:48.339 qpair failed and we were unable to recover it.
00:42:48.339 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:42:48.339 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:42:48.339 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:42:48.339 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence continues interleaved with the xtrace lines above, from 09:08:42.884670 through 09:08:42.885875 ...]
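The xtrace line above shows the harness issuing rpc_cmd bdev_malloc_create 64 512 -b Malloc0, that is, asking the target for a 64 MiB malloc bdev with 512-byte blocks (131072 blocks) named Malloc0. As a hedged sketch of what that command amounts to on the wire, a JSON-RPC 2.0 request on the target's Unix-domain RPC socket: the /var/tmp/spdk.sock path and the num_blocks/block_size parameter names below are my reading of SPDK's RPC conventions and should be treated as assumptions, not a verified interface.

/* Hedged sketch: the JSON-RPC request that `rpc_cmd bdev_malloc_create 64 512
 * -b Malloc0` plausibly sends. The socket path and parameter names are
 * assumptions about the RPC schema. 64 MiB / 512 B = 131072 blocks. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_un addr = {0};
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    const char *req =
        "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_malloc_create\","
        "\"params\":{\"name\":\"Malloc0\",\"num_blocks\":131072,\"block_size\":512}}";
    if (write(fd, req, strlen(req)) < 0) {
        perror("write");
        return 1;
    }

    char resp[4096];
    ssize_t n = read(fd, resp, sizeof(resp) - 1);   /* expect a {"result": ...} reply */
    if (n > 0) {
        resp[n] = '\0';
        printf("%s\n", resp);
    }

    close(fd);
    return 0;
}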
00:42:48.339 [2024-05-15 09:08:42.885980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:48.339 [2024-05-15 09:08:42.886133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:48.339 [2024-05-15 09:08:42.886157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420
00:42:48.339 qpair failed and we were unable to recover it.
[... the same four-line failure sequence against tqpair=0x7f9f40000b90, addr=10.0.0.2, port=4420 repeats without variation from 09:08:42.886275 through 09:08:42.891237; only the timestamps differ ...]
00:42:48.340 [2024-05-15 09:08:42.891347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.891484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.891509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.891660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.891770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.891796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.891901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.891999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.892025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.892162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.892330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.892356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.892456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.892562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.892587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.892723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.892818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.892843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.892967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.893063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.893088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 
00:42:48.340 [2024-05-15 09:08:42.893247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.893362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.893387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.893525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.893629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.893656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.893767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.893907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.893932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.894023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.894133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.894157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.894266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.894374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.894399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.894532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.894636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.894660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.894770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.894901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.894925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 
00:42:48.340 [2024-05-15 09:08:42.895030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.895153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.895177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.895294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.895389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.895415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.895527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.895643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.895669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.895773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.895872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.895898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.896004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.896101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.896126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.896233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.896338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.896363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.896487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.896608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.896633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 
00:42:48.340 [2024-05-15 09:08:42.896740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.896852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.896877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.896988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.897090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.897114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.897226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.897378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.897404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.897516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.897619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.897643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.897794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.897908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.897932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.898085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.898187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.898211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 00:42:48.340 [2024-05-15 09:08:42.898340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.898438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.898463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.340 qpair failed and we were unable to recover it. 
00:42:48.340 [2024-05-15 09:08:42.898595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.340 [2024-05-15 09:08:42.898705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.898732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 00:42:48.341 [2024-05-15 09:08:42.898832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.898957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.898982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 00:42:48.341 [2024-05-15 09:08:42.899090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.899186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.899211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 00:42:48.341 [2024-05-15 09:08:42.899334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.899437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.899462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 00:42:48.341 [2024-05-15 09:08:42.899562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.899687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.899712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 00:42:48.341 [2024-05-15 09:08:42.899855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.899959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.899986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 00:42:48.341 [2024-05-15 09:08:42.900125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.900227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.900253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 
00:42:48.341 [2024-05-15 09:08:42.900364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.900468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.900494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 00:42:48.341 [2024-05-15 09:08:42.900628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.900758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.900783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 00:42:48.341 [2024-05-15 09:08:42.900918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.901073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.901097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 00:42:48.341 [2024-05-15 09:08:42.901210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.901361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.901388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 00:42:48.341 [2024-05-15 09:08:42.901507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.901617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.901643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 00:42:48.341 [2024-05-15 09:08:42.901748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.901872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.901897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 00:42:48.341 [2024-05-15 09:08:42.902000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.902129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.902153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 
00:42:48.341 [2024-05-15 09:08:42.902283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.902381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.902406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 00:42:48.341 [2024-05-15 09:08:42.902514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.902652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.902676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 00:42:48.341 [2024-05-15 09:08:42.902786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.902890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.902916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 00:42:48.341 [2024-05-15 09:08:42.903020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.903121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.903145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 00:42:48.341 [2024-05-15 09:08:42.903254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.903368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.903393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 00:42:48.341 [2024-05-15 09:08:42.903491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.903597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.903621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 00:42:48.341 [2024-05-15 09:08:42.903716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.903840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.903866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 
00:42:48.341 [2024-05-15 09:08:42.903977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.904102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.904127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 00:42:48.341 [2024-05-15 09:08:42.904233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.904366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.904391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 00:42:48.341 [2024-05-15 09:08:42.904493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.904619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.904644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.341 qpair failed and we were unable to recover it. 00:42:48.341 [2024-05-15 09:08:42.904746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.341 [2024-05-15 09:08:42.904882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.904908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 [2024-05-15 09:08:42.905016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.905141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.905165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 [2024-05-15 09:08:42.905282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.905391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.905415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 [2024-05-15 09:08:42.905515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.905648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.905672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 
00:42:48.342 [2024-05-15 09:08:42.905779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.905882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.905906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 [2024-05-15 09:08:42.906006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.906166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.906190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 [2024-05-15 09:08:42.906345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.906449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.906474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 [2024-05-15 09:08:42.906591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.906724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.906749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 [2024-05-15 09:08:42.906874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.907006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.907031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 [2024-05-15 09:08:42.907136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.907240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.907265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 [2024-05-15 09:08:42.907379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.907517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.907547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 
00:42:48.342 [2024-05-15 09:08:42.907664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.907765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.907790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 [2024-05-15 09:08:42.907889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.908022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.908046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 [2024-05-15 09:08:42.908168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.908302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.908328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 [2024-05-15 09:08:42.908457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.908597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.908621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 [2024-05-15 09:08:42.908747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.908851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.908875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 [2024-05-15 09:08:42.909006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.909139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.909163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 [2024-05-15 09:08:42.909290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 Malloc0 00:42:48.342 [2024-05-15 09:08:42.909400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.909425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 
00:42:48.342 [2024-05-15 09:08:42.909535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.909649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.909673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 [2024-05-15 09:08:42.909772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:48.342 [2024-05-15 09:08:42.909873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.909898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:42:48.342 [2024-05-15 09:08:42.910033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:48.342 [2024-05-15 09:08:42.910144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.910169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:48.342 [2024-05-15 09:08:42.910267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.910376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.910402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 [2024-05-15 09:08:42.910530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.910632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.910658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 [2024-05-15 09:08:42.910767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.910859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.910883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 
00:42:48.342 [2024-05-15 09:08:42.911040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.911135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.911159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 [2024-05-15 09:08:42.911282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.911396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.911421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 [2024-05-15 09:08:42.911518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.911641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.911668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.342 [2024-05-15 09:08:42.911771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.911876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.342 [2024-05-15 09:08:42.911900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.342 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.912001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.912120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.912146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.912255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.912374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.912400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.912498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.912609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.912634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 
00:42:48.343 [2024-05-15 09:08:42.912740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.912834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.912858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.912962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.913070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.913080] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:48.343 [2024-05-15 09:08:42.913094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.913210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.913328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.913353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.913453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.913591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.913616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.913747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.913844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.913869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.913996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.914101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.914125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.914226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.914358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.914382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 
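The "*** TCP Transport Init ***" notice in the stream above is the target acknowledging the "nvmf_create_transport -t tcp -o" RPC traced earlier; from this point the target has a TCP transport, even though the initiator's stale qpair retries keep failing. A minimal manual equivalent, leaving the sizing knobs at their defaults (the traced run additionally passes -o from the test suite's canned transport options, which is omitted in this sketch):

    # Initialize the NVMe-oF TCP transport on a running nvmf_tgt.
    ./scripts/rpc.py nvmf_create_transport -t tcp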
00:42:48.343 [2024-05-15 09:08:42.914483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.914593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.914618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.914750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.914876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.914901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.915017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.915120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.915144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.915249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.915357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.915382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.915510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.915609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.915634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.915740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.915863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.915888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.915990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.916090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.916114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 
00:42:48.343 [2024-05-15 09:08:42.916244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.916389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.916414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.916517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.916620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.916645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.916771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.916867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.916891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.917024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.917137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.917161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.917277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.917378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.917404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.917525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.917660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.917684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.917788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.917894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.917919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 
00:42:48.343 [2024-05-15 09:08:42.918044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.918167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.918190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.918318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.918414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.918437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.918536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.918737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.918761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.918857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.918987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.919012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.343 qpair failed and we were unable to recover it. 00:42:48.343 [2024-05-15 09:08:42.919113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.343 [2024-05-15 09:08:42.919213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.344 [2024-05-15 09:08:42.919242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.344 qpair failed and we were unable to recover it. 00:42:48.344 [2024-05-15 09:08:42.919344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.344 [2024-05-15 09:08:42.919444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.344 [2024-05-15 09:08:42.919470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.344 qpair failed and we were unable to recover it. 00:42:48.344 [2024-05-15 09:08:42.919571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.344 [2024-05-15 09:08:42.919724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:48.344 [2024-05-15 09:08:42.919749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420 00:42:48.344 qpair failed and we were unable to recover it. 
00:42:48.344 [2024-05-15 09:08:42.919851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
(connect() retry loop continues, errno = 111; duplicate iterations omitted)
00:42:48.344 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:42:48.344 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:42:48.344 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:42:48.344 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
(connect() retry loop continues, errno = 111; duplicate iterations omitted)
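The xtrace_disable / set +x pair is the harness muting bash command tracing around RPC bookkeeping so the log stays readable; the real helpers live in common/autotest_common.sh, as the trace prefixes show. A minimal sketch of the idiom, not SPDK's exact implementation:

    # sketch of the tracing toggle the harness uses around noisy sections
    xtrace_disable() { set +x; }    # stop echoing each command to the log
    xtrace_restore() { set -x; }    # resume echoing

    set -x                          # autotest runs with tracing on by default
    xtrace_disable
    result=$(do_rpc_call)           # hypothetical call kept out of the trace
    xtrace_restore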
(connect() retry loop continues, errno = 111; duplicate iterations omitted)
00:42:48.345 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:42:48.345 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:42:48.345 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:42:48.345 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
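rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, so the subsystem-creation and namespace-add traces above map directly onto standalone RPC calls. A minimal sketch of the same two steps by hand; the Malloc0 bdev already exists in this test, so its creation and the 64 MiB / 512 B parameters below are illustrative assumptions, not taken from this log:

    # stand up the same subsystem outside the harness (sketch)
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512            # ram-backed bdev, sizes illustrative
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

Here -a allows any host NQN to connect and -s sets the subsystem serial number, matching the flags in the trace.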
(connect() retry loop continues, errno = 111; duplicate iterations omitted)
00:42:48.346 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:42:48.346 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:42:48.346 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:42:48.346 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
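This is the step the retry loop has been waiting for: once the listener exists, the target accepts on 10.0.0.2:4420 and the ECONNREFUSED stream stops. A sketch of the equivalent standalone calls; a TCP transport must be created once per target before any TCP listener can be added (the transport is assumed to exist already in this test run):

    scripts/rpc.py nvmf_create_transport -t tcp      # once per target, before any tcp listener
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nc -z 10.0.0.2 4420 && echo listener up          # initiator-side probe: succeeds only after the add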
(connect() retry loop continues, errno = 111; duplicate iterations omitted)
00:42:48.347 [2024-05-15 09:08:42.941073] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:42:48.347 [2024-05-15 09:08:42.941118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:48.347 [2024-05-15 09:08:42.941246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:42:48.347 [2024-05-15 09:08:42.941271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f40000b90 with addr=10.0.0.2, port=4420
00:42:48.347 qpair failed and we were unable to recover it.
00:42:48.347 A controller has encountered a failure and is being reset.
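The WARNING concerns the JSON shape of the listener RPC, not a failure: the params still name the transport under listen_address.transport, which SPDK deprecates in favor of trtype and, per the message, removes in v24.09. A hand-written sketch of the two payload shapes, field set abridged and based only on the warning text:

    # deprecated vs. preferred listen_address payload for nvmf_subsystem_add_listener
    cat <<'EOF'
    {"nqn": "nqn.2016-06.io.spdk:cnode1", "listen_address": {"transport": "TCP", "traddr": "10.0.0.2", "trsvcid": "4420"}}
    {"nqn": "nqn.2016-06.io.spdk:cnode1", "listen_address": {"trtype":    "TCP", "traddr": "10.0.0.2", "trsvcid": "4420"}}
    EOF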
00:42:48.347 [2024-05-15 09:08:42.941412] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:42:48.347 [2024-05-15 09:08:42.943852] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:48.347 [2024-05-15 09:08:42.944012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:48.347 [2024-05-15 09:08:42.944046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:48.347 [2024-05-15 09:08:42.944064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:48.347 [2024-05-15 09:08:42.944077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:48.347 [2024-05-15 09:08:42.944115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:48.347 qpair failed and we were unable to recover it.
00:42:48.347 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:42:48.347 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:42:48.347 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:42:48.347 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:42:48.347 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:42:48.347 09:08:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2441366
00:42:48.347 [2024-05-15 09:08:42.953661] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:48.347 [2024-05-15 09:08:42.953776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:48.347 [2024-05-15 09:08:42.953804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:48.347 [2024-05-15 09:08:42.953818] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:48.347 [2024-05-15 09:08:42.953831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:48.347 [2024-05-15 09:08:42.953861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:48.347 qpair failed and we were unable to recover it.
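Decoding the new failure mode: the target is listening again, but the initiator's I/O qpair CONNECT still names controller ID 0x1, which the reset target no longer knows, hence "Unknown controller ID 0x1" on the target side and a completion with sct 1 (command-specific status set) and sc 130 on the host side. 130 is 0x82, which in the NVMe-oF Connect status tables reads as Connect Invalid Parameters; that is my reading of the spec, the log itself only gives the raw values. rc -5 and the CQ transport error -6 line up with Linux -EIO and -ENXIO, the latter matching the "(No such device or address)" text in the log:

    printf 'sc 130 = 0x%x\n' 130                                                # -> sc 130 = 0x82
    python3 -c 'import os; print(5, os.strerror(5)); print(6, os.strerror(6))'  # -> Input/output error, No such device or address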
(the same seven-line block, Unknown controller ID 0x1, Connect command failed rc -5, sct 1 sc 130, failed CONNECT poll, failed tqpair=0x7f9f38000b90, CQ transport error -6 on qpair id 3, "qpair failed and we were unable to recover it.", repeats for every subsequent qpair connect attempt from 09:08:42.963 through 09:08:43.134; only the timestamps differ, duplicate blocks omitted)
00:42:48.607 [2024-05-15 09:08:43.144085] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.607 [2024-05-15 09:08:43.144191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.607 [2024-05-15 09:08:43.144227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.607 [2024-05-15 09:08:43.144244] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.607 [2024-05-15 09:08:43.144256] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.607 [2024-05-15 09:08:43.144285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.607 qpair failed and we were unable to recover it. 00:42:48.607 [2024-05-15 09:08:43.154122] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.607 [2024-05-15 09:08:43.154271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.607 [2024-05-15 09:08:43.154304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.607 [2024-05-15 09:08:43.154319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.607 [2024-05-15 09:08:43.154331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.607 [2024-05-15 09:08:43.154361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.607 qpair failed and we were unable to recover it. 00:42:48.607 [2024-05-15 09:08:43.164138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.607 [2024-05-15 09:08:43.164246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.607 [2024-05-15 09:08:43.164272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.607 [2024-05-15 09:08:43.164286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.607 [2024-05-15 09:08:43.164298] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.607 [2024-05-15 09:08:43.164327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.607 qpair failed and we were unable to recover it. 
00:42:48.607 [2024-05-15 09:08:43.174173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.608 [2024-05-15 09:08:43.174295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.608 [2024-05-15 09:08:43.174321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.608 [2024-05-15 09:08:43.174335] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.608 [2024-05-15 09:08:43.174348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.608 [2024-05-15 09:08:43.174377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.608 qpair failed and we were unable to recover it. 00:42:48.608 [2024-05-15 09:08:43.184306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.608 [2024-05-15 09:08:43.184415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.608 [2024-05-15 09:08:43.184441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.608 [2024-05-15 09:08:43.184455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.608 [2024-05-15 09:08:43.184467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.608 [2024-05-15 09:08:43.184496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.608 qpair failed and we were unable to recover it. 00:42:48.608 [2024-05-15 09:08:43.194250] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.608 [2024-05-15 09:08:43.194353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.608 [2024-05-15 09:08:43.194382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.608 [2024-05-15 09:08:43.194398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.608 [2024-05-15 09:08:43.194417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.608 [2024-05-15 09:08:43.194460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.608 qpair failed and we were unable to recover it. 
00:42:48.608 [2024-05-15 09:08:43.204242] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.608 [2024-05-15 09:08:43.204340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.608 [2024-05-15 09:08:43.204366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.608 [2024-05-15 09:08:43.204380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.608 [2024-05-15 09:08:43.204392] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.608 [2024-05-15 09:08:43.204422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.608 qpair failed and we were unable to recover it. 00:42:48.608 [2024-05-15 09:08:43.214275] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.608 [2024-05-15 09:08:43.214383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.608 [2024-05-15 09:08:43.214410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.608 [2024-05-15 09:08:43.214424] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.608 [2024-05-15 09:08:43.214436] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.608 [2024-05-15 09:08:43.214478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.608 qpair failed and we were unable to recover it. 00:42:48.608 [2024-05-15 09:08:43.224345] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.608 [2024-05-15 09:08:43.224473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.608 [2024-05-15 09:08:43.224500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.608 [2024-05-15 09:08:43.224514] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.608 [2024-05-15 09:08:43.224526] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.608 [2024-05-15 09:08:43.224555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.608 qpair failed and we were unable to recover it. 
00:42:48.608 [2024-05-15 09:08:43.234359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.608 [2024-05-15 09:08:43.234459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.608 [2024-05-15 09:08:43.234488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.608 [2024-05-15 09:08:43.234502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.608 [2024-05-15 09:08:43.234515] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.608 [2024-05-15 09:08:43.234544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.608 qpair failed and we were unable to recover it. 00:42:48.608 [2024-05-15 09:08:43.244399] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.608 [2024-05-15 09:08:43.244509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.608 [2024-05-15 09:08:43.244535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.608 [2024-05-15 09:08:43.244549] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.608 [2024-05-15 09:08:43.244561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.608 [2024-05-15 09:08:43.244590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.608 qpair failed and we were unable to recover it. 00:42:48.608 [2024-05-15 09:08:43.254410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.608 [2024-05-15 09:08:43.254526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.608 [2024-05-15 09:08:43.254552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.608 [2024-05-15 09:08:43.254566] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.608 [2024-05-15 09:08:43.254579] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.608 [2024-05-15 09:08:43.254608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.608 qpair failed and we were unable to recover it. 
00:42:48.608 [2024-05-15 09:08:43.264443] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.608 [2024-05-15 09:08:43.264544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.608 [2024-05-15 09:08:43.264569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.608 [2024-05-15 09:08:43.264583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.608 [2024-05-15 09:08:43.264595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.608 [2024-05-15 09:08:43.264637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.608 qpair failed and we were unable to recover it. 00:42:48.608 [2024-05-15 09:08:43.274458] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.608 [2024-05-15 09:08:43.274557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.608 [2024-05-15 09:08:43.274587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.608 [2024-05-15 09:08:43.274602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.608 [2024-05-15 09:08:43.274614] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.608 [2024-05-15 09:08:43.274643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.608 qpair failed and we were unable to recover it. 00:42:48.608 [2024-05-15 09:08:43.284550] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.608 [2024-05-15 09:08:43.284681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.608 [2024-05-15 09:08:43.284708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.608 [2024-05-15 09:08:43.284722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.608 [2024-05-15 09:08:43.284740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.608 [2024-05-15 09:08:43.284784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.608 qpair failed and we were unable to recover it. 
00:42:48.608 [2024-05-15 09:08:43.294514] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.608 [2024-05-15 09:08:43.294642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.608 [2024-05-15 09:08:43.294669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.608 [2024-05-15 09:08:43.294684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.608 [2024-05-15 09:08:43.294696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.608 [2024-05-15 09:08:43.294727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.608 qpair failed and we were unable to recover it. 00:42:48.608 [2024-05-15 09:08:43.304514] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.608 [2024-05-15 09:08:43.304647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.608 [2024-05-15 09:08:43.304674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.608 [2024-05-15 09:08:43.304688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.608 [2024-05-15 09:08:43.304700] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.608 [2024-05-15 09:08:43.304729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.608 qpair failed and we were unable to recover it. 00:42:48.608 [2024-05-15 09:08:43.314567] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.608 [2024-05-15 09:08:43.314696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.608 [2024-05-15 09:08:43.314723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.608 [2024-05-15 09:08:43.314737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.608 [2024-05-15 09:08:43.314749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.608 [2024-05-15 09:08:43.314778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.608 qpair failed and we were unable to recover it. 
00:42:48.608 [2024-05-15 09:08:43.324572] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.608 [2024-05-15 09:08:43.324697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.608 [2024-05-15 09:08:43.324722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.608 [2024-05-15 09:08:43.324736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.608 [2024-05-15 09:08:43.324748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.608 [2024-05-15 09:08:43.324790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.608 qpair failed and we were unable to recover it. 00:42:48.608 [2024-05-15 09:08:43.334599] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.608 [2024-05-15 09:08:43.334707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.608 [2024-05-15 09:08:43.334734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.608 [2024-05-15 09:08:43.334749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.609 [2024-05-15 09:08:43.334761] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.609 [2024-05-15 09:08:43.334790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.609 qpair failed and we were unable to recover it. 00:42:48.609 [2024-05-15 09:08:43.344753] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.609 [2024-05-15 09:08:43.344874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.609 [2024-05-15 09:08:43.344899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.609 [2024-05-15 09:08:43.344914] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.609 [2024-05-15 09:08:43.344926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.609 [2024-05-15 09:08:43.344955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.609 qpair failed and we were unable to recover it. 
00:42:48.609 [2024-05-15 09:08:43.354673] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.609 [2024-05-15 09:08:43.354782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.609 [2024-05-15 09:08:43.354812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.609 [2024-05-15 09:08:43.354827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.609 [2024-05-15 09:08:43.354839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.609 [2024-05-15 09:08:43.354869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.609 qpair failed and we were unable to recover it. 00:42:48.609 [2024-05-15 09:08:43.364701] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.609 [2024-05-15 09:08:43.364805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.609 [2024-05-15 09:08:43.364834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.609 [2024-05-15 09:08:43.364849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.609 [2024-05-15 09:08:43.364862] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.609 [2024-05-15 09:08:43.364892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.609 qpair failed and we were unable to recover it. 00:42:48.609 [2024-05-15 09:08:43.374714] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.609 [2024-05-15 09:08:43.374828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.609 [2024-05-15 09:08:43.374854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.609 [2024-05-15 09:08:43.374875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.609 [2024-05-15 09:08:43.374888] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.609 [2024-05-15 09:08:43.374917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.609 qpair failed and we were unable to recover it. 
00:42:48.609 [2024-05-15 09:08:43.384813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.609 [2024-05-15 09:08:43.384921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.609 [2024-05-15 09:08:43.384946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.609 [2024-05-15 09:08:43.384960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.609 [2024-05-15 09:08:43.384973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.609 [2024-05-15 09:08:43.385002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.609 qpair failed and we were unable to recover it. 00:42:48.609 [2024-05-15 09:08:43.394774] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.609 [2024-05-15 09:08:43.394875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.609 [2024-05-15 09:08:43.394904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.609 [2024-05-15 09:08:43.394919] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.609 [2024-05-15 09:08:43.394931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.609 [2024-05-15 09:08:43.394960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.609 qpair failed and we were unable to recover it. 00:42:48.868 [2024-05-15 09:08:43.404773] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.868 [2024-05-15 09:08:43.404871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.868 [2024-05-15 09:08:43.404898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.869 [2024-05-15 09:08:43.404912] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.869 [2024-05-15 09:08:43.404924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.869 [2024-05-15 09:08:43.404953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.869 qpair failed and we were unable to recover it. 
00:42:48.869 [2024-05-15 09:08:43.414824] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.869 [2024-05-15 09:08:43.414927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.869 [2024-05-15 09:08:43.414952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.869 [2024-05-15 09:08:43.414967] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.869 [2024-05-15 09:08:43.414979] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.869 [2024-05-15 09:08:43.415007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.869 qpair failed and we were unable to recover it. 00:42:48.869 [2024-05-15 09:08:43.424841] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.869 [2024-05-15 09:08:43.424945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.869 [2024-05-15 09:08:43.424970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.869 [2024-05-15 09:08:43.424984] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.869 [2024-05-15 09:08:43.424996] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.869 [2024-05-15 09:08:43.425026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.869 qpair failed and we were unable to recover it. 00:42:48.869 [2024-05-15 09:08:43.434865] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.869 [2024-05-15 09:08:43.434973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.869 [2024-05-15 09:08:43.435003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.869 [2024-05-15 09:08:43.435018] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.869 [2024-05-15 09:08:43.435030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.869 [2024-05-15 09:08:43.435059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.869 qpair failed and we were unable to recover it. 
00:42:48.869 [2024-05-15 09:08:43.444975] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.869 [2024-05-15 09:08:43.445089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.869 [2024-05-15 09:08:43.445115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.869 [2024-05-15 09:08:43.445129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.869 [2024-05-15 09:08:43.445142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.869 [2024-05-15 09:08:43.445171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.869 qpair failed and we were unable to recover it. 00:42:48.869 [2024-05-15 09:08:43.454958] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.869 [2024-05-15 09:08:43.455066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.869 [2024-05-15 09:08:43.455092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.869 [2024-05-15 09:08:43.455106] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.869 [2024-05-15 09:08:43.455119] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.869 [2024-05-15 09:08:43.455148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.869 qpair failed and we were unable to recover it. 00:42:48.869 [2024-05-15 09:08:43.464964] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.869 [2024-05-15 09:08:43.465107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.869 [2024-05-15 09:08:43.465138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.869 [2024-05-15 09:08:43.465154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.869 [2024-05-15 09:08:43.465166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.869 [2024-05-15 09:08:43.465208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.869 qpair failed and we were unable to recover it. 
00:42:48.869 [2024-05-15 09:08:43.474983] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.869 [2024-05-15 09:08:43.475082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.869 [2024-05-15 09:08:43.475108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.869 [2024-05-15 09:08:43.475127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.869 [2024-05-15 09:08:43.475139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.869 [2024-05-15 09:08:43.475168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.869 qpair failed and we were unable to recover it. 00:42:48.869 [2024-05-15 09:08:43.485026] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.869 [2024-05-15 09:08:43.485124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.869 [2024-05-15 09:08:43.485151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.869 [2024-05-15 09:08:43.485165] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.869 [2024-05-15 09:08:43.485177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.869 [2024-05-15 09:08:43.485225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.869 qpair failed and we were unable to recover it. 00:42:48.869 [2024-05-15 09:08:43.495053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.869 [2024-05-15 09:08:43.495171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.869 [2024-05-15 09:08:43.495197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.869 [2024-05-15 09:08:43.495212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.869 [2024-05-15 09:08:43.495232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.869 [2024-05-15 09:08:43.495262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.869 qpair failed and we were unable to recover it. 
00:42:48.869 [2024-05-15 09:08:43.505081] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.869 [2024-05-15 09:08:43.505185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.869 [2024-05-15 09:08:43.505212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.869 [2024-05-15 09:08:43.505236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.869 [2024-05-15 09:08:43.505249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.869 [2024-05-15 09:08:43.505284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.869 qpair failed and we were unable to recover it. 00:42:48.869 [2024-05-15 09:08:43.515092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.869 [2024-05-15 09:08:43.515195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.869 [2024-05-15 09:08:43.515232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.869 [2024-05-15 09:08:43.515250] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.869 [2024-05-15 09:08:43.515262] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.869 [2024-05-15 09:08:43.515291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.869 qpair failed and we were unable to recover it. 00:42:48.869 [2024-05-15 09:08:43.525119] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.869 [2024-05-15 09:08:43.525223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.869 [2024-05-15 09:08:43.525248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.869 [2024-05-15 09:08:43.525263] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.869 [2024-05-15 09:08:43.525276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.869 [2024-05-15 09:08:43.525306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.869 qpair failed and we were unable to recover it. 
00:42:48.869 [2024-05-15 09:08:43.535189] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.870 [2024-05-15 09:08:43.535303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.870 [2024-05-15 09:08:43.535329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.870 [2024-05-15 09:08:43.535344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.870 [2024-05-15 09:08:43.535357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.870 [2024-05-15 09:08:43.535387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.870 qpair failed and we were unable to recover it. 00:42:48.870 [2024-05-15 09:08:43.545206] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.870 [2024-05-15 09:08:43.545325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.870 [2024-05-15 09:08:43.545350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.870 [2024-05-15 09:08:43.545366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.870 [2024-05-15 09:08:43.545378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.870 [2024-05-15 09:08:43.545407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.870 qpair failed and we were unable to recover it. 00:42:48.870 [2024-05-15 09:08:43.555243] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.870 [2024-05-15 09:08:43.555365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.870 [2024-05-15 09:08:43.555397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.870 [2024-05-15 09:08:43.555413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.870 [2024-05-15 09:08:43.555426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.870 [2024-05-15 09:08:43.555457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.870 qpair failed and we were unable to recover it. 
00:42:48.870 [2024-05-15 09:08:43.565243] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.870 [2024-05-15 09:08:43.565349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.870 [2024-05-15 09:08:43.565377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.870 [2024-05-15 09:08:43.565392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.870 [2024-05-15 09:08:43.565405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.870 [2024-05-15 09:08:43.565435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.870 qpair failed and we were unable to recover it. 00:42:48.870 [2024-05-15 09:08:43.575306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.870 [2024-05-15 09:08:43.575439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.870 [2024-05-15 09:08:43.575466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.870 [2024-05-15 09:08:43.575481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.870 [2024-05-15 09:08:43.575493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.870 [2024-05-15 09:08:43.575522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.870 qpair failed and we were unable to recover it. 00:42:48.870 [2024-05-15 09:08:43.585317] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.870 [2024-05-15 09:08:43.585435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.870 [2024-05-15 09:08:43.585461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.870 [2024-05-15 09:08:43.585476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.870 [2024-05-15 09:08:43.585489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.870 [2024-05-15 09:08:43.585518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.870 qpair failed and we were unable to recover it. 
00:42:48.870 [2024-05-15 09:08:43.595371] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.870 [2024-05-15 09:08:43.595487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.870 [2024-05-15 09:08:43.595515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.870 [2024-05-15 09:08:43.595530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.870 [2024-05-15 09:08:43.595554] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.870 [2024-05-15 09:08:43.595601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.870 qpair failed and we were unable to recover it. 00:42:48.870 [2024-05-15 09:08:43.605374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.870 [2024-05-15 09:08:43.605487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.870 [2024-05-15 09:08:43.605513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.870 [2024-05-15 09:08:43.605529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.870 [2024-05-15 09:08:43.605542] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.870 [2024-05-15 09:08:43.605571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.870 qpair failed and we were unable to recover it. 00:42:48.870 [2024-05-15 09:08:43.615490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.870 [2024-05-15 09:08:43.615636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.870 [2024-05-15 09:08:43.615663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.870 [2024-05-15 09:08:43.615678] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.870 [2024-05-15 09:08:43.615691] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.870 [2024-05-15 09:08:43.615720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.870 qpair failed and we were unable to recover it. 
00:42:48.870 [2024-05-15 09:08:43.625434] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.870 [2024-05-15 09:08:43.625533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.870 [2024-05-15 09:08:43.625558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.870 [2024-05-15 09:08:43.625573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.870 [2024-05-15 09:08:43.625586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.870 [2024-05-15 09:08:43.625616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.870 qpair failed and we were unable to recover it. 00:42:48.870 [2024-05-15 09:08:43.635453] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.870 [2024-05-15 09:08:43.635565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.870 [2024-05-15 09:08:43.635593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.870 [2024-05-15 09:08:43.635609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.870 [2024-05-15 09:08:43.635622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.870 [2024-05-15 09:08:43.635651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.870 qpair failed and we were unable to recover it. 00:42:48.870 [2024-05-15 09:08:43.645494] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.870 [2024-05-15 09:08:43.645599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.870 [2024-05-15 09:08:43.645624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.870 [2024-05-15 09:08:43.645638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.870 [2024-05-15 09:08:43.645652] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.870 [2024-05-15 09:08:43.645681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.870 qpair failed and we were unable to recover it. 
00:42:48.870 [2024-05-15 09:08:43.655508] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:48.870 [2024-05-15 09:08:43.655632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:48.870 [2024-05-15 09:08:43.655659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:48.870 [2024-05-15 09:08:43.655674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:48.870 [2024-05-15 09:08:43.655687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:48.870 [2024-05-15 09:08:43.655717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:48.870 qpair failed and we were unable to recover it. 00:42:49.130 [2024-05-15 09:08:43.665536] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.130 [2024-05-15 09:08:43.665666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.130 [2024-05-15 09:08:43.665703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.130 [2024-05-15 09:08:43.665717] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.130 [2024-05-15 09:08:43.665731] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.130 [2024-05-15 09:08:43.665760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.130 qpair failed and we were unable to recover it. 00:42:49.130 [2024-05-15 09:08:43.675595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.131 [2024-05-15 09:08:43.675714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.131 [2024-05-15 09:08:43.675741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.131 [2024-05-15 09:08:43.675756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.131 [2024-05-15 09:08:43.675770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.131 [2024-05-15 09:08:43.675799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.131 qpair failed and we were unable to recover it. 
00:42:49.131 [2024-05-15 09:08:43.685586] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.131 [2024-05-15 09:08:43.685701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.131 [2024-05-15 09:08:43.685727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.131 [2024-05-15 09:08:43.685743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.131 [2024-05-15 09:08:43.685762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.131 [2024-05-15 09:08:43.685793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.131 qpair failed and we were unable to recover it. 00:42:49.131 [2024-05-15 09:08:43.695741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.131 [2024-05-15 09:08:43.695863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.131 [2024-05-15 09:08:43.695890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.131 [2024-05-15 09:08:43.695905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.131 [2024-05-15 09:08:43.695918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.131 [2024-05-15 09:08:43.695948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.131 qpair failed and we were unable to recover it. 00:42:49.131 [2024-05-15 09:08:43.705682] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.131 [2024-05-15 09:08:43.705795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.131 [2024-05-15 09:08:43.705821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.131 [2024-05-15 09:08:43.705837] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.131 [2024-05-15 09:08:43.705849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.131 [2024-05-15 09:08:43.705879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.131 qpair failed and we were unable to recover it. 
00:42:49.131 [2024-05-15 09:08:43.715686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.131 [2024-05-15 09:08:43.715794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.131 [2024-05-15 09:08:43.715825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.131 [2024-05-15 09:08:43.715841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.131 [2024-05-15 09:08:43.715854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.131 [2024-05-15 09:08:43.715885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.131 qpair failed and we were unable to recover it. 00:42:49.131 [2024-05-15 09:08:43.725784] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.131 [2024-05-15 09:08:43.725899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.131 [2024-05-15 09:08:43.725926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.131 [2024-05-15 09:08:43.725941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.131 [2024-05-15 09:08:43.725953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.131 [2024-05-15 09:08:43.725983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.131 qpair failed and we were unable to recover it. 00:42:49.131 [2024-05-15 09:08:43.735973] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.131 [2024-05-15 09:08:43.736121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.131 [2024-05-15 09:08:43.736163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.131 [2024-05-15 09:08:43.736178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.131 [2024-05-15 09:08:43.736190] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.131 [2024-05-15 09:08:43.736233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.131 qpair failed and we were unable to recover it. 
00:42:49.131 [2024-05-15 09:08:43.745835] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.131 [2024-05-15 09:08:43.745951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.131 [2024-05-15 09:08:43.745976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.131 [2024-05-15 09:08:43.745991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.131 [2024-05-15 09:08:43.746003] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.131 [2024-05-15 09:08:43.746033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.131 qpair failed and we were unable to recover it. 00:42:49.131 [2024-05-15 09:08:43.755844] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.131 [2024-05-15 09:08:43.755951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.131 [2024-05-15 09:08:43.755978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.131 [2024-05-15 09:08:43.755993] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.131 [2024-05-15 09:08:43.756006] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.131 [2024-05-15 09:08:43.756049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.131 qpair failed and we were unable to recover it. 00:42:49.131 [2024-05-15 09:08:43.765869] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.131 [2024-05-15 09:08:43.765997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.131 [2024-05-15 09:08:43.766024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.131 [2024-05-15 09:08:43.766040] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.131 [2024-05-15 09:08:43.766052] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.131 [2024-05-15 09:08:43.766082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.131 qpair failed and we were unable to recover it. 
00:42:49.131 [2024-05-15 09:08:43.775958] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.131 [2024-05-15 09:08:43.776075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.131 [2024-05-15 09:08:43.776102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.131 [2024-05-15 09:08:43.776123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.131 [2024-05-15 09:08:43.776137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.131 [2024-05-15 09:08:43.776167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.131 qpair failed and we were unable to recover it. 00:42:49.131 [2024-05-15 09:08:43.785882] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.131 [2024-05-15 09:08:43.785992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.131 [2024-05-15 09:08:43.786019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.131 [2024-05-15 09:08:43.786034] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.131 [2024-05-15 09:08:43.786047] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.131 [2024-05-15 09:08:43.786077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.131 qpair failed and we were unable to recover it. 00:42:49.131 [2024-05-15 09:08:43.795918] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.131 [2024-05-15 09:08:43.796024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.131 [2024-05-15 09:08:43.796054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.131 [2024-05-15 09:08:43.796070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.131 [2024-05-15 09:08:43.796083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.131 [2024-05-15 09:08:43.796112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.131 qpair failed and we were unable to recover it. 
00:42:49.131 [2024-05-15 09:08:43.805940] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.132 [2024-05-15 09:08:43.806069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.132 [2024-05-15 09:08:43.806095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.132 [2024-05-15 09:08:43.806110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.132 [2024-05-15 09:08:43.806123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.132 [2024-05-15 09:08:43.806153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.132 qpair failed and we were unable to recover it. 00:42:49.132 [2024-05-15 09:08:43.815997] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.132 [2024-05-15 09:08:43.816114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.132 [2024-05-15 09:08:43.816140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.132 [2024-05-15 09:08:43.816155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.132 [2024-05-15 09:08:43.816167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.132 [2024-05-15 09:08:43.816196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.132 qpair failed and we were unable to recover it. 00:42:49.132 [2024-05-15 09:08:43.826009] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.132 [2024-05-15 09:08:43.826130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.132 [2024-05-15 09:08:43.826156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.132 [2024-05-15 09:08:43.826171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.132 [2024-05-15 09:08:43.826184] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.132 [2024-05-15 09:08:43.826213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.132 qpair failed and we were unable to recover it. 
00:42:49.132 [2024-05-15 09:08:43.836038] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.132 [2024-05-15 09:08:43.836147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.132 [2024-05-15 09:08:43.836176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.132 [2024-05-15 09:08:43.836191] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.132 [2024-05-15 09:08:43.836204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.132 [2024-05-15 09:08:43.836240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.132 qpair failed and we were unable to recover it. 00:42:49.132 [2024-05-15 09:08:43.846052] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.132 [2024-05-15 09:08:43.846164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.132 [2024-05-15 09:08:43.846190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.132 [2024-05-15 09:08:43.846205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.132 [2024-05-15 09:08:43.846227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.132 [2024-05-15 09:08:43.846259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.132 qpair failed and we were unable to recover it. 00:42:49.132 [2024-05-15 09:08:43.856228] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.132 [2024-05-15 09:08:43.856360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.132 [2024-05-15 09:08:43.856402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.132 [2024-05-15 09:08:43.856419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.132 [2024-05-15 09:08:43.856433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.132 [2024-05-15 09:08:43.856468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.132 qpair failed and we were unable to recover it. 
00:42:49.132 [2024-05-15 09:08:43.866122] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.132 [2024-05-15 09:08:43.866278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.132 [2024-05-15 09:08:43.866309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.132 [2024-05-15 09:08:43.866326] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.132 [2024-05-15 09:08:43.866339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.132 [2024-05-15 09:08:43.866369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.132 qpair failed and we were unable to recover it. 00:42:49.132 [2024-05-15 09:08:43.876168] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.132 [2024-05-15 09:08:43.876287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.132 [2024-05-15 09:08:43.876313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.132 [2024-05-15 09:08:43.876328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.132 [2024-05-15 09:08:43.876341] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.132 [2024-05-15 09:08:43.876370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.132 qpair failed and we were unable to recover it. 00:42:49.132 [2024-05-15 09:08:43.886253] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.132 [2024-05-15 09:08:43.886352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.132 [2024-05-15 09:08:43.886379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.132 [2024-05-15 09:08:43.886394] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.132 [2024-05-15 09:08:43.886407] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.132 [2024-05-15 09:08:43.886436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.132 qpair failed and we were unable to recover it. 
00:42:49.132 [2024-05-15 09:08:43.896230] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.132 [2024-05-15 09:08:43.896357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.132 [2024-05-15 09:08:43.896386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.132 [2024-05-15 09:08:43.896401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.132 [2024-05-15 09:08:43.896415] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.132 [2024-05-15 09:08:43.896445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.132 qpair failed and we were unable to recover it. 00:42:49.132 [2024-05-15 09:08:43.906255] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.132 [2024-05-15 09:08:43.906366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.132 [2024-05-15 09:08:43.906393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.132 [2024-05-15 09:08:43.906409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.132 [2024-05-15 09:08:43.906421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.132 [2024-05-15 09:08:43.906458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.132 qpair failed and we were unable to recover it. 00:42:49.132 [2024-05-15 09:08:43.916304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.132 [2024-05-15 09:08:43.916424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.132 [2024-05-15 09:08:43.916451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.132 [2024-05-15 09:08:43.916466] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.132 [2024-05-15 09:08:43.916479] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.132 [2024-05-15 09:08:43.916508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.132 qpair failed and we were unable to recover it. 
00:42:49.394 [2024-05-15 09:08:43.926322] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.394 [2024-05-15 09:08:43.926426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.394 [2024-05-15 09:08:43.926464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.394 [2024-05-15 09:08:43.926479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.394 [2024-05-15 09:08:43.926492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.394 [2024-05-15 09:08:43.926521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.394 qpair failed and we were unable to recover it. 00:42:49.394 [2024-05-15 09:08:43.936331] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.395 [2024-05-15 09:08:43.936460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.395 [2024-05-15 09:08:43.936486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.395 [2024-05-15 09:08:43.936501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.395 [2024-05-15 09:08:43.936514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.395 [2024-05-15 09:08:43.936544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.395 qpair failed and we were unable to recover it. 00:42:49.395 [2024-05-15 09:08:43.946376] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.395 [2024-05-15 09:08:43.946485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.395 [2024-05-15 09:08:43.946511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.395 [2024-05-15 09:08:43.946526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.395 [2024-05-15 09:08:43.946539] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.395 [2024-05-15 09:08:43.946568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.395 qpair failed and we were unable to recover it. 
00:42:49.395 [2024-05-15 09:08:43.956377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.395 [2024-05-15 09:08:43.956481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.395 [2024-05-15 09:08:43.956517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.395 [2024-05-15 09:08:43.956535] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.395 [2024-05-15 09:08:43.956548] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.395 [2024-05-15 09:08:43.956580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.395 qpair failed and we were unable to recover it. 00:42:49.395 [2024-05-15 09:08:43.966457] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.395 [2024-05-15 09:08:43.966581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.395 [2024-05-15 09:08:43.966609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.395 [2024-05-15 09:08:43.966624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.395 [2024-05-15 09:08:43.966637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.395 [2024-05-15 09:08:43.966667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.395 qpair failed and we were unable to recover it. 00:42:49.395 [2024-05-15 09:08:43.976467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.395 [2024-05-15 09:08:43.976587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.395 [2024-05-15 09:08:43.976614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.395 [2024-05-15 09:08:43.976629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.395 [2024-05-15 09:08:43.976641] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.395 [2024-05-15 09:08:43.976671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.395 qpair failed and we were unable to recover it. 
00:42:49.395 [2024-05-15 09:08:43.986479] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.395 [2024-05-15 09:08:43.986592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.395 [2024-05-15 09:08:43.986619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.395 [2024-05-15 09:08:43.986634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.395 [2024-05-15 09:08:43.986646] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.395 [2024-05-15 09:08:43.986676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.395 qpair failed and we were unable to recover it. 00:42:49.395 [2024-05-15 09:08:43.996556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.395 [2024-05-15 09:08:43.996682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.395 [2024-05-15 09:08:43.996709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.395 [2024-05-15 09:08:43.996725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.395 [2024-05-15 09:08:43.996737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.395 [2024-05-15 09:08:43.996792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.395 qpair failed and we were unable to recover it. 00:42:49.395 [2024-05-15 09:08:44.006606] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.395 [2024-05-15 09:08:44.006716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.395 [2024-05-15 09:08:44.006744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.395 [2024-05-15 09:08:44.006759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.395 [2024-05-15 09:08:44.006772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.395 [2024-05-15 09:08:44.006802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.395 qpair failed and we were unable to recover it. 
00:42:49.395 [2024-05-15 09:08:44.016580] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.395 [2024-05-15 09:08:44.016696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.395 [2024-05-15 09:08:44.016722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.395 [2024-05-15 09:08:44.016737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.395 [2024-05-15 09:08:44.016749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.395 [2024-05-15 09:08:44.016779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.395 qpair failed and we were unable to recover it. 00:42:49.395 [2024-05-15 09:08:44.026590] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.395 [2024-05-15 09:08:44.026702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.395 [2024-05-15 09:08:44.026729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.395 [2024-05-15 09:08:44.026744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.395 [2024-05-15 09:08:44.026757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.395 [2024-05-15 09:08:44.026786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.395 qpair failed and we were unable to recover it. 00:42:49.395 [2024-05-15 09:08:44.036658] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.395 [2024-05-15 09:08:44.036773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.395 [2024-05-15 09:08:44.036799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.395 [2024-05-15 09:08:44.036815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.395 [2024-05-15 09:08:44.036827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.395 [2024-05-15 09:08:44.036856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.395 qpair failed and we were unable to recover it. 
00:42:49.395 [2024-05-15 09:08:44.046657] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.395 [2024-05-15 09:08:44.046770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.395 [2024-05-15 09:08:44.046797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.395 [2024-05-15 09:08:44.046811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.395 [2024-05-15 09:08:44.046824] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.395 [2024-05-15 09:08:44.046853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.395 qpair failed and we were unable to recover it. 00:42:49.395 [2024-05-15 09:08:44.056753] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.395 [2024-05-15 09:08:44.056890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.395 [2024-05-15 09:08:44.056916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.395 [2024-05-15 09:08:44.056931] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.395 [2024-05-15 09:08:44.056944] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.395 [2024-05-15 09:08:44.056973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.395 qpair failed and we were unable to recover it. 00:42:49.395 [2024-05-15 09:08:44.066723] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.395 [2024-05-15 09:08:44.066834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.395 [2024-05-15 09:08:44.066860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.396 [2024-05-15 09:08:44.066876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.396 [2024-05-15 09:08:44.066888] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.396 [2024-05-15 09:08:44.066918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.396 qpair failed and we were unable to recover it. 
00:42:49.396 [2024-05-15 09:08:44.076787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.396 [2024-05-15 09:08:44.076914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.396 [2024-05-15 09:08:44.076943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.396 [2024-05-15 09:08:44.076959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.396 [2024-05-15 09:08:44.076972] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.396 [2024-05-15 09:08:44.077004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.396 qpair failed and we were unable to recover it. 00:42:49.396 [2024-05-15 09:08:44.086759] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.396 [2024-05-15 09:08:44.086866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.396 [2024-05-15 09:08:44.086894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.396 [2024-05-15 09:08:44.086909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.396 [2024-05-15 09:08:44.086928] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.396 [2024-05-15 09:08:44.086958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.396 qpair failed and we were unable to recover it. 00:42:49.396 [2024-05-15 09:08:44.096789] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.396 [2024-05-15 09:08:44.096904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.396 [2024-05-15 09:08:44.096930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.396 [2024-05-15 09:08:44.096946] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.396 [2024-05-15 09:08:44.096958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.396 [2024-05-15 09:08:44.096988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.396 qpair failed and we were unable to recover it. 
00:42:49.396 [2024-05-15 09:08:44.106805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.396 [2024-05-15 09:08:44.106913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.396 [2024-05-15 09:08:44.106940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.396 [2024-05-15 09:08:44.106955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.396 [2024-05-15 09:08:44.106967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.396 [2024-05-15 09:08:44.106997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.396 qpair failed and we were unable to recover it. 00:42:49.396 [2024-05-15 09:08:44.116846] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.396 [2024-05-15 09:08:44.116986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.396 [2024-05-15 09:08:44.117012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.396 [2024-05-15 09:08:44.117027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.396 [2024-05-15 09:08:44.117040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.396 [2024-05-15 09:08:44.117070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.396 qpair failed and we were unable to recover it. 00:42:49.396 [2024-05-15 09:08:44.126875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.396 [2024-05-15 09:08:44.126977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.396 [2024-05-15 09:08:44.127005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.396 [2024-05-15 09:08:44.127020] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.396 [2024-05-15 09:08:44.127033] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.396 [2024-05-15 09:08:44.127062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.396 qpair failed and we were unable to recover it. 
00:42:49.396 [2024-05-15 09:08:44.136922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.396 [2024-05-15 09:08:44.137047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.396 [2024-05-15 09:08:44.137074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.396 [2024-05-15 09:08:44.137089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.396 [2024-05-15 09:08:44.137102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.396 [2024-05-15 09:08:44.137132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.396 qpair failed and we were unable to recover it. 00:42:49.396 [2024-05-15 09:08:44.146941] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.396 [2024-05-15 09:08:44.147059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.396 [2024-05-15 09:08:44.147086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.396 [2024-05-15 09:08:44.147101] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.396 [2024-05-15 09:08:44.147114] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.396 [2024-05-15 09:08:44.147143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.396 qpair failed and we were unable to recover it. 00:42:49.396 [2024-05-15 09:08:44.156972] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.396 [2024-05-15 09:08:44.157092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.396 [2024-05-15 09:08:44.157119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.396 [2024-05-15 09:08:44.157135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.396 [2024-05-15 09:08:44.157147] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.396 [2024-05-15 09:08:44.157177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.396 qpair failed and we were unable to recover it. 
00:42:49.396 [2024-05-15 09:08:44.167000] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.396 [2024-05-15 09:08:44.167105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.396 [2024-05-15 09:08:44.167131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.396 [2024-05-15 09:08:44.167146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.396 [2024-05-15 09:08:44.167158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.396 [2024-05-15 09:08:44.167188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.396 qpair failed and we were unable to recover it. 00:42:49.396 [2024-05-15 09:08:44.177027] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.396 [2024-05-15 09:08:44.177132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.396 [2024-05-15 09:08:44.177157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.396 [2024-05-15 09:08:44.177177] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.396 [2024-05-15 09:08:44.177191] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.396 [2024-05-15 09:08:44.177228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.396 qpair failed and we were unable to recover it. 00:42:49.658 [2024-05-15 09:08:44.187040] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.658 [2024-05-15 09:08:44.187178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.658 [2024-05-15 09:08:44.187204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.658 [2024-05-15 09:08:44.187227] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.658 [2024-05-15 09:08:44.187242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.658 [2024-05-15 09:08:44.187272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.658 qpair failed and we were unable to recover it. 
00:42:49.658 [2024-05-15 09:08:44.197125] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.658 [2024-05-15 09:08:44.197241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.658 [2024-05-15 09:08:44.197267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.658 [2024-05-15 09:08:44.197282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.658 [2024-05-15 09:08:44.197295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.658 [2024-05-15 09:08:44.197325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.658 qpair failed and we were unable to recover it. 00:42:49.658 [2024-05-15 09:08:44.207132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.658 [2024-05-15 09:08:44.207268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.658 [2024-05-15 09:08:44.207293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.658 [2024-05-15 09:08:44.207308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.658 [2024-05-15 09:08:44.207322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.658 [2024-05-15 09:08:44.207351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.658 qpair failed and we were unable to recover it. 00:42:49.658 [2024-05-15 09:08:44.217164] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:49.658 [2024-05-15 09:08:44.217295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:49.658 [2024-05-15 09:08:44.217321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:49.658 [2024-05-15 09:08:44.217336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:49.658 [2024-05-15 09:08:44.217349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:49.658 [2024-05-15 09:08:44.217379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:49.658 qpair failed and we were unable to recover it. 
00:42:49.658 [2024-05-15 09:08:44.227195] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.658 [2024-05-15 09:08:44.227350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.658 [2024-05-15 09:08:44.227376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.658 [2024-05-15 09:08:44.227391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.658 [2024-05-15 09:08:44.227404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.658 [2024-05-15 09:08:44.227433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.658 qpair failed and we were unable to recover it.
00:42:49.658 [2024-05-15 09:08:44.237320] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.658 [2024-05-15 09:08:44.237455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.658 [2024-05-15 09:08:44.237480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.658 [2024-05-15 09:08:44.237495] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.658 [2024-05-15 09:08:44.237508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.658 [2024-05-15 09:08:44.237537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.658 qpair failed and we were unable to recover it.
00:42:49.658 [2024-05-15 09:08:44.247303] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.658 [2024-05-15 09:08:44.247442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.658 [2024-05-15 09:08:44.247468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.658 [2024-05-15 09:08:44.247483] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.658 [2024-05-15 09:08:44.247496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.658 [2024-05-15 09:08:44.247525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.658 qpair failed and we were unable to recover it.
00:42:49.658 [2024-05-15 09:08:44.257284] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.658 [2024-05-15 09:08:44.257400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.658 [2024-05-15 09:08:44.257425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.658 [2024-05-15 09:08:44.257440] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.658 [2024-05-15 09:08:44.257454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.658 [2024-05-15 09:08:44.257483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.658 qpair failed and we were unable to recover it.
00:42:49.658 [2024-05-15 09:08:44.267310] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.658 [2024-05-15 09:08:44.267418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.658 [2024-05-15 09:08:44.267444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.658 [2024-05-15 09:08:44.267466] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.658 [2024-05-15 09:08:44.267480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.658 [2024-05-15 09:08:44.267510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.658 qpair failed and we were unable to recover it.
00:42:49.658 [2024-05-15 09:08:44.277312] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.658 [2024-05-15 09:08:44.277418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.658 [2024-05-15 09:08:44.277447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.658 [2024-05-15 09:08:44.277463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.658 [2024-05-15 09:08:44.277476] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.658 [2024-05-15 09:08:44.277506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.658 qpair failed and we were unable to recover it.
00:42:49.658 [2024-05-15 09:08:44.287349] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.658 [2024-05-15 09:08:44.287481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.658 [2024-05-15 09:08:44.287506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.658 [2024-05-15 09:08:44.287521] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.659 [2024-05-15 09:08:44.287534] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.659 [2024-05-15 09:08:44.287564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.659 qpair failed and we were unable to recover it.
00:42:49.659 [2024-05-15 09:08:44.297398] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.659 [2024-05-15 09:08:44.297506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.659 [2024-05-15 09:08:44.297532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.659 [2024-05-15 09:08:44.297546] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.659 [2024-05-15 09:08:44.297559] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.659 [2024-05-15 09:08:44.297603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.659 qpair failed and we were unable to recover it.
00:42:49.659 [2024-05-15 09:08:44.307509] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.659 [2024-05-15 09:08:44.307648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.659 [2024-05-15 09:08:44.307674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.659 [2024-05-15 09:08:44.307690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.659 [2024-05-15 09:08:44.307703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.659 [2024-05-15 09:08:44.307732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.659 qpair failed and we were unable to recover it.
00:42:49.659 [2024-05-15 09:08:44.317446] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.659 [2024-05-15 09:08:44.317556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.659 [2024-05-15 09:08:44.317585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.659 [2024-05-15 09:08:44.317601] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.659 [2024-05-15 09:08:44.317614] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.659 [2024-05-15 09:08:44.317645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.659 qpair failed and we were unable to recover it.
00:42:49.659 [2024-05-15 09:08:44.327442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.659 [2024-05-15 09:08:44.327539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.659 [2024-05-15 09:08:44.327565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.659 [2024-05-15 09:08:44.327580] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.659 [2024-05-15 09:08:44.327593] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.659 [2024-05-15 09:08:44.327623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.659 qpair failed and we were unable to recover it.
00:42:49.659 [2024-05-15 09:08:44.337491] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.659 [2024-05-15 09:08:44.337604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.659 [2024-05-15 09:08:44.337629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.659 [2024-05-15 09:08:44.337643] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.659 [2024-05-15 09:08:44.337656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.659 [2024-05-15 09:08:44.337686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.659 qpair failed and we were unable to recover it.
00:42:49.659 [2024-05-15 09:08:44.347510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.659 [2024-05-15 09:08:44.347614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.659 [2024-05-15 09:08:44.347639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.659 [2024-05-15 09:08:44.347653] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.659 [2024-05-15 09:08:44.347667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.659 [2024-05-15 09:08:44.347696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.659 qpair failed and we were unable to recover it.
00:42:49.659 [2024-05-15 09:08:44.357521] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.659 [2024-05-15 09:08:44.357621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.659 [2024-05-15 09:08:44.357651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.659 [2024-05-15 09:08:44.357671] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.659 [2024-05-15 09:08:44.357685] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.659 [2024-05-15 09:08:44.357715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.659 qpair failed and we were unable to recover it.
00:42:49.659 [2024-05-15 09:08:44.367552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.659 [2024-05-15 09:08:44.367654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.659 [2024-05-15 09:08:44.367679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.659 [2024-05-15 09:08:44.367694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.659 [2024-05-15 09:08:44.367707] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.659 [2024-05-15 09:08:44.367736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.659 qpair failed and we were unable to recover it.
00:42:49.659 [2024-05-15 09:08:44.377604] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.659 [2024-05-15 09:08:44.377715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.659 [2024-05-15 09:08:44.377740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.659 [2024-05-15 09:08:44.377754] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.659 [2024-05-15 09:08:44.377767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.659 [2024-05-15 09:08:44.377797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.659 qpair failed and we were unable to recover it.
00:42:49.659 [2024-05-15 09:08:44.387647] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.659 [2024-05-15 09:08:44.387776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.659 [2024-05-15 09:08:44.387802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.659 [2024-05-15 09:08:44.387818] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.659 [2024-05-15 09:08:44.387831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.659 [2024-05-15 09:08:44.387860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.659 qpair failed and we were unable to recover it.
00:42:49.659 [2024-05-15 09:08:44.397725] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.659 [2024-05-15 09:08:44.397885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.659 [2024-05-15 09:08:44.397913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.659 [2024-05-15 09:08:44.397928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.659 [2024-05-15 09:08:44.397956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.659 [2024-05-15 09:08:44.397990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.659 qpair failed and we were unable to recover it.
00:42:49.659 [2024-05-15 09:08:44.407694] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.659 [2024-05-15 09:08:44.407800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.659 [2024-05-15 09:08:44.407825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.659 [2024-05-15 09:08:44.407840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.659 [2024-05-15 09:08:44.407854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.659 [2024-05-15 09:08:44.407897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.659 qpair failed and we were unable to recover it.
00:42:49.659 [2024-05-15 09:08:44.417741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.659 [2024-05-15 09:08:44.417864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.659 [2024-05-15 09:08:44.417897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.659 [2024-05-15 09:08:44.417917] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.659 [2024-05-15 09:08:44.417936] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.659 [2024-05-15 09:08:44.417975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.659 qpair failed and we were unable to recover it.
00:42:49.660 [2024-05-15 09:08:44.427753] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.660 [2024-05-15 09:08:44.427855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.660 [2024-05-15 09:08:44.427881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.660 [2024-05-15 09:08:44.427895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.660 [2024-05-15 09:08:44.427908] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.660 [2024-05-15 09:08:44.427938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.660 qpair failed and we were unable to recover it.
00:42:49.660 [2024-05-15 09:08:44.437763] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.660 [2024-05-15 09:08:44.437869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.660 [2024-05-15 09:08:44.437906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.660 [2024-05-15 09:08:44.437933] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.660 [2024-05-15 09:08:44.437946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.660 [2024-05-15 09:08:44.437976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.660 qpair failed and we were unable to recover it.
00:42:49.660 [2024-05-15 09:08:44.447827] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.660 [2024-05-15 09:08:44.447934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.660 [2024-05-15 09:08:44.447971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.660 [2024-05-15 09:08:44.447987] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.660 [2024-05-15 09:08:44.448001] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.660 [2024-05-15 09:08:44.448031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.660 qpair failed and we were unable to recover it.
00:42:49.922 [2024-05-15 09:08:44.457849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.922 [2024-05-15 09:08:44.457966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.922 [2024-05-15 09:08:44.457990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.922 [2024-05-15 09:08:44.458005] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.922 [2024-05-15 09:08:44.458019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.922 [2024-05-15 09:08:44.458048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.922 qpair failed and we were unable to recover it.
00:42:49.922 [2024-05-15 09:08:44.467873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.922 [2024-05-15 09:08:44.467979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.922 [2024-05-15 09:08:44.468005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.922 [2024-05-15 09:08:44.468019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.922 [2024-05-15 09:08:44.468032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.922 [2024-05-15 09:08:44.468061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.922 qpair failed and we were unable to recover it.
00:42:49.922 [2024-05-15 09:08:44.477887] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.922 [2024-05-15 09:08:44.478012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.922 [2024-05-15 09:08:44.478038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.922 [2024-05-15 09:08:44.478053] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.922 [2024-05-15 09:08:44.478066] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.922 [2024-05-15 09:08:44.478127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.922 qpair failed and we were unable to recover it.
00:42:49.922 [2024-05-15 09:08:44.487906] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.922 [2024-05-15 09:08:44.488001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.922 [2024-05-15 09:08:44.488027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.922 [2024-05-15 09:08:44.488041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.922 [2024-05-15 09:08:44.488060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.922 [2024-05-15 09:08:44.488091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.922 qpair failed and we were unable to recover it.
00:42:49.922 [2024-05-15 09:08:44.497952] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.922 [2024-05-15 09:08:44.498054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.922 [2024-05-15 09:08:44.498080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.922 [2024-05-15 09:08:44.498095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.922 [2024-05-15 09:08:44.498108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.922 [2024-05-15 09:08:44.498138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.923 qpair failed and we were unable to recover it.
00:42:49.923 [2024-05-15 09:08:44.507961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.923 [2024-05-15 09:08:44.508069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.923 [2024-05-15 09:08:44.508094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.923 [2024-05-15 09:08:44.508109] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.923 [2024-05-15 09:08:44.508123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.923 [2024-05-15 09:08:44.508152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.923 qpair failed and we were unable to recover it.
00:42:49.923 [2024-05-15 09:08:44.517991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.923 [2024-05-15 09:08:44.518094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.923 [2024-05-15 09:08:44.518120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.923 [2024-05-15 09:08:44.518140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.923 [2024-05-15 09:08:44.518153] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.923 [2024-05-15 09:08:44.518196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.923 qpair failed and we were unable to recover it.
00:42:49.923 [2024-05-15 09:08:44.528039] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.923 [2024-05-15 09:08:44.528152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.923 [2024-05-15 09:08:44.528178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.923 [2024-05-15 09:08:44.528192] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.923 [2024-05-15 09:08:44.528206] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.923 [2024-05-15 09:08:44.528242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.923 qpair failed and we were unable to recover it.
00:42:49.923 [2024-05-15 09:08:44.538158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.923 [2024-05-15 09:08:44.538281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.923 [2024-05-15 09:08:44.538308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.923 [2024-05-15 09:08:44.538322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.923 [2024-05-15 09:08:44.538335] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.923 [2024-05-15 09:08:44.538365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.923 qpair failed and we were unable to recover it.
00:42:49.923 [2024-05-15 09:08:44.548106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.923 [2024-05-15 09:08:44.548231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.923 [2024-05-15 09:08:44.548257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.923 [2024-05-15 09:08:44.548272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.923 [2024-05-15 09:08:44.548286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.923 [2024-05-15 09:08:44.548330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.923 qpair failed and we were unable to recover it.
00:42:49.923 [2024-05-15 09:08:44.558116] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.923 [2024-05-15 09:08:44.558223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.923 [2024-05-15 09:08:44.558254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.923 [2024-05-15 09:08:44.558269] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.923 [2024-05-15 09:08:44.558282] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.923 [2024-05-15 09:08:44.558315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.923 qpair failed and we were unable to recover it.
00:42:49.923 [2024-05-15 09:08:44.568140] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.923 [2024-05-15 09:08:44.568278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.923 [2024-05-15 09:08:44.568304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.923 [2024-05-15 09:08:44.568319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.923 [2024-05-15 09:08:44.568332] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.923 [2024-05-15 09:08:44.568362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.923 qpair failed and we were unable to recover it.
00:42:49.923 [2024-05-15 09:08:44.578162] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.923 [2024-05-15 09:08:44.578291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.923 [2024-05-15 09:08:44.578318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.923 [2024-05-15 09:08:44.578338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.923 [2024-05-15 09:08:44.578352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.923 [2024-05-15 09:08:44.578383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.923 qpair failed and we were unable to recover it.
00:42:49.923 [2024-05-15 09:08:44.588236] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.923 [2024-05-15 09:08:44.588395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.923 [2024-05-15 09:08:44.588421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.923 [2024-05-15 09:08:44.588436] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.923 [2024-05-15 09:08:44.588449] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.923 [2024-05-15 09:08:44.588479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.923 qpair failed and we were unable to recover it.
00:42:49.923 [2024-05-15 09:08:44.598273] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.923 [2024-05-15 09:08:44.598382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.923 [2024-05-15 09:08:44.598408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.923 [2024-05-15 09:08:44.598423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.923 [2024-05-15 09:08:44.598436] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.923 [2024-05-15 09:08:44.598466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.923 qpair failed and we were unable to recover it.
00:42:49.923 [2024-05-15 09:08:44.608340] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.923 [2024-05-15 09:08:44.608441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.923 [2024-05-15 09:08:44.608466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.923 [2024-05-15 09:08:44.608481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.923 [2024-05-15 09:08:44.608494] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.923 [2024-05-15 09:08:44.608523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.923 qpair failed and we were unable to recover it.
00:42:49.923 [2024-05-15 09:08:44.618419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.923 [2024-05-15 09:08:44.618541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.923 [2024-05-15 09:08:44.618565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.923 [2024-05-15 09:08:44.618579] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.923 [2024-05-15 09:08:44.618592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.923 [2024-05-15 09:08:44.618622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.923 qpair failed and we were unable to recover it.
00:42:49.923 [2024-05-15 09:08:44.628319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.923 [2024-05-15 09:08:44.628425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.923 [2024-05-15 09:08:44.628453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.923 [2024-05-15 09:08:44.628468] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.923 [2024-05-15 09:08:44.628481] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.923 [2024-05-15 09:08:44.628511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.923 qpair failed and we were unable to recover it.
00:42:49.923 [2024-05-15 09:08:44.638334] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.924 [2024-05-15 09:08:44.638454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.924 [2024-05-15 09:08:44.638483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.924 [2024-05-15 09:08:44.638499] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.924 [2024-05-15 09:08:44.638511] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.924 [2024-05-15 09:08:44.638540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.924 qpair failed and we were unable to recover it.
00:42:49.924 [2024-05-15 09:08:44.648402] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.924 [2024-05-15 09:08:44.648524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.924 [2024-05-15 09:08:44.648548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.924 [2024-05-15 09:08:44.648563] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.924 [2024-05-15 09:08:44.648576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.924 [2024-05-15 09:08:44.648606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.924 qpair failed and we were unable to recover it.
00:42:49.924 [2024-05-15 09:08:44.658451] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.924 [2024-05-15 09:08:44.658560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.924 [2024-05-15 09:08:44.658585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.924 [2024-05-15 09:08:44.658600] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.924 [2024-05-15 09:08:44.658613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.924 [2024-05-15 09:08:44.658643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.924 qpair failed and we were unable to recover it.
00:42:49.924 [2024-05-15 09:08:44.668427] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.924 [2024-05-15 09:08:44.668541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.924 [2024-05-15 09:08:44.668566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.924 [2024-05-15 09:08:44.668589] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.924 [2024-05-15 09:08:44.668603] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.924 [2024-05-15 09:08:44.668633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.924 qpair failed and we were unable to recover it.
00:42:49.924 [2024-05-15 09:08:44.678464] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.924 [2024-05-15 09:08:44.678566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.924 [2024-05-15 09:08:44.678599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.924 [2024-05-15 09:08:44.678616] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.924 [2024-05-15 09:08:44.678629] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.924 [2024-05-15 09:08:44.678660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.924 qpair failed and we were unable to recover it.
00:42:49.924 [2024-05-15 09:08:44.688475] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.924 [2024-05-15 09:08:44.688574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.924 [2024-05-15 09:08:44.688600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.924 [2024-05-15 09:08:44.688614] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.924 [2024-05-15 09:08:44.688627] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.924 [2024-05-15 09:08:44.688657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.924 qpair failed and we were unable to recover it.
00:42:49.924 [2024-05-15 09:08:44.698630] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.924 [2024-05-15 09:08:44.698739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.924 [2024-05-15 09:08:44.698767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.924 [2024-05-15 09:08:44.698782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.924 [2024-05-15 09:08:44.698795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.924 [2024-05-15 09:08:44.698825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.924 qpair failed and we were unable to recover it.
00:42:49.924 [2024-05-15 09:08:44.708535] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:49.924 [2024-05-15 09:08:44.708643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:49.924 [2024-05-15 09:08:44.708668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:49.924 [2024-05-15 09:08:44.708682] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:49.924 [2024-05-15 09:08:44.708695] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:49.924 [2024-05-15 09:08:44.708725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:49.924 qpair failed and we were unable to recover it.
00:42:50.186 [2024-05-15 09:08:44.718572] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:50.186 [2024-05-15 09:08:44.718687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:50.186 [2024-05-15 09:08:44.718713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:50.186 [2024-05-15 09:08:44.718727] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:50.186 [2024-05-15 09:08:44.718740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:50.186 [2024-05-15 09:08:44.718770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:50.186 qpair failed and we were unable to recover it.
00:42:50.186 [2024-05-15 09:08:44.728577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:50.186 [2024-05-15 09:08:44.728681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:50.187 [2024-05-15 09:08:44.728706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:50.187 [2024-05-15 09:08:44.728720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:50.187 [2024-05-15 09:08:44.728733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:50.187 [2024-05-15 09:08:44.728763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:50.187 qpair failed and we were unable to recover it.
00:42:50.187 [2024-05-15 09:08:44.738750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:50.187 [2024-05-15 09:08:44.738864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:50.187 [2024-05-15 09:08:44.738889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:50.187 [2024-05-15 09:08:44.738903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:50.187 [2024-05-15 09:08:44.738916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:50.187 [2024-05-15 09:08:44.738946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:50.187 qpair failed and we were unable to recover it.
00:42:50.187 [2024-05-15 09:08:44.748677] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:50.187 [2024-05-15 09:08:44.748786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:50.187 [2024-05-15 09:08:44.748813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:50.187 [2024-05-15 09:08:44.748828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:50.187 [2024-05-15 09:08:44.748840] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:50.187 [2024-05-15 09:08:44.748870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:50.187 qpair failed and we were unable to recover it.
00:42:50.187 [2024-05-15 09:08:44.758680] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:50.187 [2024-05-15 09:08:44.758787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:50.187 [2024-05-15 09:08:44.758835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:50.187 [2024-05-15 09:08:44.758853] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:50.187 [2024-05-15 09:08:44.758866] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:50.187 [2024-05-15 09:08:44.758896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:50.187 qpair failed and we were unable to recover it.
00:42:50.187 [2024-05-15 09:08:44.768679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:50.187 [2024-05-15 09:08:44.768781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:50.187 [2024-05-15 09:08:44.768807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:50.187 [2024-05-15 09:08:44.768821] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:50.187 [2024-05-15 09:08:44.768834] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:50.187 [2024-05-15 09:08:44.768864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:50.187 qpair failed and we were unable to recover it.
00:42:50.187 [2024-05-15 09:08:44.778741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:50.187 [2024-05-15 09:08:44.778853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:50.187 [2024-05-15 09:08:44.778880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:50.187 [2024-05-15 09:08:44.778895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:50.187 [2024-05-15 09:08:44.778908] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:50.187 [2024-05-15 09:08:44.778938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:50.187 qpair failed and we were unable to recover it.
00:42:50.187 [2024-05-15 09:08:44.788737] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:42:50.187 [2024-05-15 09:08:44.788844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:42:50.187 [2024-05-15 09:08:44.788869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:42:50.187 [2024-05-15 09:08:44.788884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:42:50.187 [2024-05-15 09:08:44.788897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90
00:42:50.187 [2024-05-15 09:08:44.788926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:42:50.187 qpair failed and we were unable to recover it.
00:42:50.187 [2024-05-15 09:08:44.798899] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.187 [2024-05-15 09:08:44.799054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.187 [2024-05-15 09:08:44.799079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.187 [2024-05-15 09:08:44.799093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.187 [2024-05-15 09:08:44.799106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:50.187 [2024-05-15 09:08:44.799143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:50.187 qpair failed and we were unable to recover it. 00:42:50.187 [2024-05-15 09:08:44.808851] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.187 [2024-05-15 09:08:44.808969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.187 [2024-05-15 09:08:44.808994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.187 [2024-05-15 09:08:44.809010] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.187 [2024-05-15 09:08:44.809022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:50.187 [2024-05-15 09:08:44.809052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:50.187 qpair failed and we were unable to recover it. 00:42:50.187 [2024-05-15 09:08:44.818857] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.187 [2024-05-15 09:08:44.818985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.187 [2024-05-15 09:08:44.819011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.187 [2024-05-15 09:08:44.819026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.187 [2024-05-15 09:08:44.819039] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:50.187 [2024-05-15 09:08:44.819068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:50.187 qpair failed and we were unable to recover it. 
00:42:50.187 [2024-05-15 09:08:44.828980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.187 [2024-05-15 09:08:44.829125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.187 [2024-05-15 09:08:44.829152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.187 [2024-05-15 09:08:44.829166] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.187 [2024-05-15 09:08:44.829179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:50.187 [2024-05-15 09:08:44.829209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:50.187 qpair failed and we were unable to recover it. 00:42:50.187 [2024-05-15 09:08:44.838944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.187 [2024-05-15 09:08:44.839052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.187 [2024-05-15 09:08:44.839085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.187 [2024-05-15 09:08:44.839102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.187 [2024-05-15 09:08:44.839114] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.187 [2024-05-15 09:08:44.839146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.187 qpair failed and we were unable to recover it. 00:42:50.187 [2024-05-15 09:08:44.848980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.187 [2024-05-15 09:08:44.849098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.187 [2024-05-15 09:08:44.849131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.187 [2024-05-15 09:08:44.849147] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.187 [2024-05-15 09:08:44.849159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.187 [2024-05-15 09:08:44.849190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.187 qpair failed and we were unable to recover it. 
00:42:50.187 [2024-05-15 09:08:44.859023] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.187 [2024-05-15 09:08:44.859173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.187 [2024-05-15 09:08:44.859201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.188 [2024-05-15 09:08:44.859224] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.188 [2024-05-15 09:08:44.859239] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.188 [2024-05-15 09:08:44.859270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.188 qpair failed and we were unable to recover it. 00:42:50.188 [2024-05-15 09:08:44.869025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.188 [2024-05-15 09:08:44.869130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.188 [2024-05-15 09:08:44.869156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.188 [2024-05-15 09:08:44.869171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.188 [2024-05-15 09:08:44.869184] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.188 [2024-05-15 09:08:44.869221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.188 qpair failed and we were unable to recover it. 00:42:50.188 [2024-05-15 09:08:44.879044] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.188 [2024-05-15 09:08:44.879142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.188 [2024-05-15 09:08:44.879168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.188 [2024-05-15 09:08:44.879182] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.188 [2024-05-15 09:08:44.879195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.188 [2024-05-15 09:08:44.879232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.188 qpair failed and we were unable to recover it. 
00:42:50.188 [2024-05-15 09:08:44.889098] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.188 [2024-05-15 09:08:44.889222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.188 [2024-05-15 09:08:44.889250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.188 [2024-05-15 09:08:44.889266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.188 [2024-05-15 09:08:44.889286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.188 [2024-05-15 09:08:44.889318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.188 qpair failed and we were unable to recover it. 00:42:50.188 [2024-05-15 09:08:44.899161] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.188 [2024-05-15 09:08:44.899281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.188 [2024-05-15 09:08:44.899306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.188 [2024-05-15 09:08:44.899320] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.188 [2024-05-15 09:08:44.899337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.188 [2024-05-15 09:08:44.899366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.188 qpair failed and we were unable to recover it. 00:42:50.188 [2024-05-15 09:08:44.909119] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.188 [2024-05-15 09:08:44.909254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.188 [2024-05-15 09:08:44.909287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.188 [2024-05-15 09:08:44.909302] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.188 [2024-05-15 09:08:44.909314] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.188 [2024-05-15 09:08:44.909343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.188 qpair failed and we were unable to recover it. 
00:42:50.188 [2024-05-15 09:08:44.919238] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.188 [2024-05-15 09:08:44.919361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.188 [2024-05-15 09:08:44.919387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.188 [2024-05-15 09:08:44.919401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.188 [2024-05-15 09:08:44.919413] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.188 [2024-05-15 09:08:44.919443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.188 qpair failed and we were unable to recover it. 00:42:50.188 [2024-05-15 09:08:44.929173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.188 [2024-05-15 09:08:44.929283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.188 [2024-05-15 09:08:44.929308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.188 [2024-05-15 09:08:44.929323] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.188 [2024-05-15 09:08:44.929336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.188 [2024-05-15 09:08:44.929365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.188 qpair failed and we were unable to recover it. 00:42:50.188 [2024-05-15 09:08:44.939206] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.188 [2024-05-15 09:08:44.939329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.188 [2024-05-15 09:08:44.939355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.188 [2024-05-15 09:08:44.939369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.188 [2024-05-15 09:08:44.939382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.188 [2024-05-15 09:08:44.939413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.188 qpair failed and we were unable to recover it. 
00:42:50.188 [2024-05-15 09:08:44.949233] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.188 [2024-05-15 09:08:44.949356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.188 [2024-05-15 09:08:44.949382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.188 [2024-05-15 09:08:44.949396] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.188 [2024-05-15 09:08:44.949409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.188 [2024-05-15 09:08:44.949440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.188 qpair failed and we were unable to recover it. 00:42:50.188 [2024-05-15 09:08:44.959271] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.188 [2024-05-15 09:08:44.959379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.188 [2024-05-15 09:08:44.959404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.188 [2024-05-15 09:08:44.959419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.188 [2024-05-15 09:08:44.959432] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.188 [2024-05-15 09:08:44.959462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.188 qpair failed and we were unable to recover it. 00:42:50.188 [2024-05-15 09:08:44.969287] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.188 [2024-05-15 09:08:44.969394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.188 [2024-05-15 09:08:44.969419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.188 [2024-05-15 09:08:44.969434] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.188 [2024-05-15 09:08:44.969447] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.188 [2024-05-15 09:08:44.969477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.188 qpair failed and we were unable to recover it. 
00:42:50.451 [2024-05-15 09:08:44.979337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.451 [2024-05-15 09:08:44.979444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.451 [2024-05-15 09:08:44.979470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.451 [2024-05-15 09:08:44.979485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.451 [2024-05-15 09:08:44.979503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.451 [2024-05-15 09:08:44.979534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.451 qpair failed and we were unable to recover it. 00:42:50.451 [2024-05-15 09:08:44.989337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.451 [2024-05-15 09:08:44.989441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.451 [2024-05-15 09:08:44.989466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.451 [2024-05-15 09:08:44.989481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.451 [2024-05-15 09:08:44.989493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.451 [2024-05-15 09:08:44.989523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.451 qpair failed and we were unable to recover it. 00:42:50.451 [2024-05-15 09:08:44.999411] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.451 [2024-05-15 09:08:44.999529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.451 [2024-05-15 09:08:44.999554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.451 [2024-05-15 09:08:44.999569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.451 [2024-05-15 09:08:44.999582] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.451 [2024-05-15 09:08:44.999612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.451 qpair failed and we were unable to recover it. 
00:42:50.451 [2024-05-15 09:08:45.009442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.451 [2024-05-15 09:08:45.009561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.451 [2024-05-15 09:08:45.009586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.451 [2024-05-15 09:08:45.009601] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.451 [2024-05-15 09:08:45.009614] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.451 [2024-05-15 09:08:45.009644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.451 qpair failed and we were unable to recover it. 00:42:50.451 [2024-05-15 09:08:45.019439] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.451 [2024-05-15 09:08:45.019544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.451 [2024-05-15 09:08:45.019570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.451 [2024-05-15 09:08:45.019585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.451 [2024-05-15 09:08:45.019598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.451 [2024-05-15 09:08:45.019627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.451 qpair failed and we were unable to recover it. 00:42:50.451 [2024-05-15 09:08:45.029479] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.451 [2024-05-15 09:08:45.029606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.451 [2024-05-15 09:08:45.029632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.451 [2024-05-15 09:08:45.029646] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.451 [2024-05-15 09:08:45.029659] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.451 [2024-05-15 09:08:45.029688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.451 qpair failed and we were unable to recover it. 
00:42:50.451 [2024-05-15 09:08:45.039513] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.451 [2024-05-15 09:08:45.039616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.451 [2024-05-15 09:08:45.039642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.451 [2024-05-15 09:08:45.039656] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.451 [2024-05-15 09:08:45.039669] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.451 [2024-05-15 09:08:45.039699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.451 qpair failed and we were unable to recover it. 00:42:50.451 [2024-05-15 09:08:45.049536] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.451 [2024-05-15 09:08:45.049636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.451 [2024-05-15 09:08:45.049662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.452 [2024-05-15 09:08:45.049676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.452 [2024-05-15 09:08:45.049689] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.452 [2024-05-15 09:08:45.049719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.452 qpair failed and we were unable to recover it. 00:42:50.452 [2024-05-15 09:08:45.059685] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.452 [2024-05-15 09:08:45.059810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.452 [2024-05-15 09:08:45.059835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.452 [2024-05-15 09:08:45.059850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.452 [2024-05-15 09:08:45.059863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.452 [2024-05-15 09:08:45.059893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.452 qpair failed and we were unable to recover it. 
00:42:50.452 [2024-05-15 09:08:45.069581] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.452 [2024-05-15 09:08:45.069691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.452 [2024-05-15 09:08:45.069716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.452 [2024-05-15 09:08:45.069739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.452 [2024-05-15 09:08:45.069753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.452 [2024-05-15 09:08:45.069783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.452 qpair failed and we were unable to recover it. 00:42:50.452 [2024-05-15 09:08:45.079701] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.452 [2024-05-15 09:08:45.079800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.452 [2024-05-15 09:08:45.079826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.452 [2024-05-15 09:08:45.079839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.452 [2024-05-15 09:08:45.079852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.452 [2024-05-15 09:08:45.079882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.452 qpair failed and we were unable to recover it. 00:42:50.452 [2024-05-15 09:08:45.089633] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.452 [2024-05-15 09:08:45.089736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.452 [2024-05-15 09:08:45.089762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.452 [2024-05-15 09:08:45.089777] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.452 [2024-05-15 09:08:45.089789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.452 [2024-05-15 09:08:45.089822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.452 qpair failed and we were unable to recover it. 
00:42:50.452 [2024-05-15 09:08:45.099709] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.452 [2024-05-15 09:08:45.099827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.452 [2024-05-15 09:08:45.099853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.452 [2024-05-15 09:08:45.099868] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.452 [2024-05-15 09:08:45.099882] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.452 [2024-05-15 09:08:45.099912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.452 qpair failed and we were unable to recover it. 00:42:50.452 [2024-05-15 09:08:45.109694] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.452 [2024-05-15 09:08:45.109801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.452 [2024-05-15 09:08:45.109827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.452 [2024-05-15 09:08:45.109842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.452 [2024-05-15 09:08:45.109854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.452 [2024-05-15 09:08:45.109884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.452 qpair failed and we were unable to recover it. 00:42:50.452 [2024-05-15 09:08:45.119714] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.452 [2024-05-15 09:08:45.119846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.452 [2024-05-15 09:08:45.119871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.452 [2024-05-15 09:08:45.119886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.452 [2024-05-15 09:08:45.119899] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.452 [2024-05-15 09:08:45.119928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.452 qpair failed and we were unable to recover it. 
00:42:50.452 [2024-05-15 09:08:45.129780] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.452 [2024-05-15 09:08:45.129896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.452 [2024-05-15 09:08:45.129922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.452 [2024-05-15 09:08:45.129937] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.452 [2024-05-15 09:08:45.129954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.452 [2024-05-15 09:08:45.129986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.452 qpair failed and we were unable to recover it. 00:42:50.452 [2024-05-15 09:08:45.139903] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.452 [2024-05-15 09:08:45.140046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.452 [2024-05-15 09:08:45.140071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.452 [2024-05-15 09:08:45.140086] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.452 [2024-05-15 09:08:45.140099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.452 [2024-05-15 09:08:45.140129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.452 qpair failed and we were unable to recover it. 00:42:50.452 [2024-05-15 09:08:45.149803] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.452 [2024-05-15 09:08:45.149908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.452 [2024-05-15 09:08:45.149934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.452 [2024-05-15 09:08:45.149949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.452 [2024-05-15 09:08:45.149961] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.452 [2024-05-15 09:08:45.149992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.452 qpair failed and we were unable to recover it. 
00:42:50.452 [2024-05-15 09:08:45.159850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.452 [2024-05-15 09:08:45.159954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.452 [2024-05-15 09:08:45.159984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.452 [2024-05-15 09:08:45.160000] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.452 [2024-05-15 09:08:45.160013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.452 [2024-05-15 09:08:45.160043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.452 qpair failed and we were unable to recover it. 00:42:50.452 [2024-05-15 09:08:45.169872] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.452 [2024-05-15 09:08:45.170000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.452 [2024-05-15 09:08:45.170027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.452 [2024-05-15 09:08:45.170042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.452 [2024-05-15 09:08:45.170055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.452 [2024-05-15 09:08:45.170085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.452 qpair failed and we were unable to recover it. 00:42:50.452 [2024-05-15 09:08:45.179889] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.452 [2024-05-15 09:08:45.179994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.452 [2024-05-15 09:08:45.180019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.452 [2024-05-15 09:08:45.180035] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.452 [2024-05-15 09:08:45.180049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.453 [2024-05-15 09:08:45.180080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.453 qpair failed and we were unable to recover it. 
00:42:50.453 [2024-05-15 09:08:45.189930] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.453 [2024-05-15 09:08:45.190032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.453 [2024-05-15 09:08:45.190057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.453 [2024-05-15 09:08:45.190072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.453 [2024-05-15 09:08:45.190084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.453 [2024-05-15 09:08:45.190114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.453 qpair failed and we were unable to recover it. 00:42:50.453 [2024-05-15 09:08:45.199953] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.453 [2024-05-15 09:08:45.200057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.453 [2024-05-15 09:08:45.200084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.453 [2024-05-15 09:08:45.200099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.453 [2024-05-15 09:08:45.200111] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.453 [2024-05-15 09:08:45.200146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.453 qpair failed and we were unable to recover it. 00:42:50.453 [2024-05-15 09:08:45.209959] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.453 [2024-05-15 09:08:45.210058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.453 [2024-05-15 09:08:45.210085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.453 [2024-05-15 09:08:45.210099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.453 [2024-05-15 09:08:45.210112] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.453 [2024-05-15 09:08:45.210142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.453 qpair failed and we were unable to recover it. 
00:42:50.453 [2024-05-15 09:08:45.220006] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.453 [2024-05-15 09:08:45.220117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.453 [2024-05-15 09:08:45.220144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.453 [2024-05-15 09:08:45.220159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.453 [2024-05-15 09:08:45.220171] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.453 [2024-05-15 09:08:45.220200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.453 qpair failed and we were unable to recover it. 00:42:50.453 [2024-05-15 09:08:45.230037] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.453 [2024-05-15 09:08:45.230134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.453 [2024-05-15 09:08:45.230158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.453 [2024-05-15 09:08:45.230172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.453 [2024-05-15 09:08:45.230185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.453 [2024-05-15 09:08:45.230222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.453 qpair failed and we were unable to recover it. 00:42:50.453 [2024-05-15 09:08:45.240035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.453 [2024-05-15 09:08:45.240139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.453 [2024-05-15 09:08:45.240166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.453 [2024-05-15 09:08:45.240181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.453 [2024-05-15 09:08:45.240193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.453 [2024-05-15 09:08:45.240229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.453 qpair failed and we were unable to recover it. 
00:42:50.732 [2024-05-15 09:08:45.250085] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.732 [2024-05-15 09:08:45.250198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.732 [2024-05-15 09:08:45.250238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.732 [2024-05-15 09:08:45.250255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.732 [2024-05-15 09:08:45.250268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.732 [2024-05-15 09:08:45.250300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.732 qpair failed and we were unable to recover it. 00:42:50.732 [2024-05-15 09:08:45.260166] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.732 [2024-05-15 09:08:45.260287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.732 [2024-05-15 09:08:45.260315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.732 [2024-05-15 09:08:45.260331] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.732 [2024-05-15 09:08:45.260344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.732 [2024-05-15 09:08:45.260376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.732 qpair failed and we were unable to recover it. 00:42:50.732 [2024-05-15 09:08:45.270154] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.732 [2024-05-15 09:08:45.270283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.732 [2024-05-15 09:08:45.270310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.732 [2024-05-15 09:08:45.270325] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.732 [2024-05-15 09:08:45.270341] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.732 [2024-05-15 09:08:45.270374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.732 qpair failed and we were unable to recover it. 
00:42:50.732 [2024-05-15 09:08:45.280195] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.732 [2024-05-15 09:08:45.280314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.732 [2024-05-15 09:08:45.280342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.732 [2024-05-15 09:08:45.280357] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.732 [2024-05-15 09:08:45.280370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.732 [2024-05-15 09:08:45.280402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.732 qpair failed and we were unable to recover it. 00:42:50.732 [2024-05-15 09:08:45.290252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.732 [2024-05-15 09:08:45.290403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.732 [2024-05-15 09:08:45.290431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.732 [2024-05-15 09:08:45.290446] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.732 [2024-05-15 09:08:45.290463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.732 [2024-05-15 09:08:45.290495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.732 qpair failed and we were unable to recover it. 00:42:50.732 [2024-05-15 09:08:45.300239] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.732 [2024-05-15 09:08:45.300342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.732 [2024-05-15 09:08:45.300369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.732 [2024-05-15 09:08:45.300384] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.732 [2024-05-15 09:08:45.300397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.732 [2024-05-15 09:08:45.300426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.732 qpair failed and we were unable to recover it. 
00:42:50.732 [2024-05-15 09:08:45.310285] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.732 [2024-05-15 09:08:45.310432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.732 [2024-05-15 09:08:45.310459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.732 [2024-05-15 09:08:45.310474] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.732 [2024-05-15 09:08:45.310486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.732 [2024-05-15 09:08:45.310515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.732 qpair failed and we were unable to recover it. 00:42:50.732 [2024-05-15 09:08:45.320288] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.732 [2024-05-15 09:08:45.320387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.732 [2024-05-15 09:08:45.320413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.732 [2024-05-15 09:08:45.320428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.732 [2024-05-15 09:08:45.320440] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.732 [2024-05-15 09:08:45.320469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.732 qpair failed and we were unable to recover it. 00:42:50.732 [2024-05-15 09:08:45.330358] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.732 [2024-05-15 09:08:45.330467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.732 [2024-05-15 09:08:45.330493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.733 [2024-05-15 09:08:45.330508] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.733 [2024-05-15 09:08:45.330521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.733 [2024-05-15 09:08:45.330551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.733 qpair failed and we were unable to recover it. 
00:42:50.733 [2024-05-15 09:08:45.340370] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.733 [2024-05-15 09:08:45.340485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.733 [2024-05-15 09:08:45.340511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.733 [2024-05-15 09:08:45.340526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.733 [2024-05-15 09:08:45.340539] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.733 [2024-05-15 09:08:45.340569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.733 qpair failed and we were unable to recover it. 00:42:50.733 [2024-05-15 09:08:45.350388] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.733 [2024-05-15 09:08:45.350494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.733 [2024-05-15 09:08:45.350522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.733 [2024-05-15 09:08:45.350537] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.733 [2024-05-15 09:08:45.350549] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.733 [2024-05-15 09:08:45.350579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.733 qpair failed and we were unable to recover it. 00:42:50.733 [2024-05-15 09:08:45.360418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.733 [2024-05-15 09:08:45.360572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.733 [2024-05-15 09:08:45.360598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.733 [2024-05-15 09:08:45.360613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.733 [2024-05-15 09:08:45.360626] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.733 [2024-05-15 09:08:45.360655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.733 qpair failed and we were unable to recover it. 
00:42:50.733 [2024-05-15 09:08:45.370474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.733 [2024-05-15 09:08:45.370581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.733 [2024-05-15 09:08:45.370607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.733 [2024-05-15 09:08:45.370622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.733 [2024-05-15 09:08:45.370634] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.733 [2024-05-15 09:08:45.370664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.733 qpair failed and we were unable to recover it. 00:42:50.733 [2024-05-15 09:08:45.380466] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.733 [2024-05-15 09:08:45.380576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.733 [2024-05-15 09:08:45.380603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.733 [2024-05-15 09:08:45.380618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.733 [2024-05-15 09:08:45.380635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.733 [2024-05-15 09:08:45.380665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.733 qpair failed and we were unable to recover it. 00:42:50.733 [2024-05-15 09:08:45.390500] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.733 [2024-05-15 09:08:45.390604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.733 [2024-05-15 09:08:45.390630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.733 [2024-05-15 09:08:45.390645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.733 [2024-05-15 09:08:45.390657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.733 [2024-05-15 09:08:45.390687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.733 qpair failed and we were unable to recover it. 
00:42:50.733 [2024-05-15 09:08:45.400535] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.733 [2024-05-15 09:08:45.400675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.733 [2024-05-15 09:08:45.400701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.733 [2024-05-15 09:08:45.400716] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.733 [2024-05-15 09:08:45.400729] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.733 [2024-05-15 09:08:45.400758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.733 qpair failed and we were unable to recover it. 00:42:50.733 [2024-05-15 09:08:45.410544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.733 [2024-05-15 09:08:45.410643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.733 [2024-05-15 09:08:45.410670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.733 [2024-05-15 09:08:45.410685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.733 [2024-05-15 09:08:45.410698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.733 [2024-05-15 09:08:45.410727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.733 qpair failed and we were unable to recover it. 00:42:50.733 [2024-05-15 09:08:45.420610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.733 [2024-05-15 09:08:45.420727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.733 [2024-05-15 09:08:45.420753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.733 [2024-05-15 09:08:45.420767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.733 [2024-05-15 09:08:45.420780] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.733 [2024-05-15 09:08:45.420810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.733 qpair failed and we were unable to recover it. 
00:42:50.733 [2024-05-15 09:08:45.430634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.733 [2024-05-15 09:08:45.430739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.733 [2024-05-15 09:08:45.430765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.733 [2024-05-15 09:08:45.430780] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.733 [2024-05-15 09:08:45.430793] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.733 [2024-05-15 09:08:45.430823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.733 qpair failed and we were unable to recover it. 00:42:50.733 [2024-05-15 09:08:45.440637] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.733 [2024-05-15 09:08:45.440768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.733 [2024-05-15 09:08:45.440794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.733 [2024-05-15 09:08:45.440809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.734 [2024-05-15 09:08:45.440822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.734 [2024-05-15 09:08:45.440851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.734 qpair failed and we were unable to recover it. 00:42:50.734 [2024-05-15 09:08:45.450689] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.734 [2024-05-15 09:08:45.450820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.734 [2024-05-15 09:08:45.450846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.734 [2024-05-15 09:08:45.450861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.734 [2024-05-15 09:08:45.450874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.734 [2024-05-15 09:08:45.450904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.734 qpair failed and we were unable to recover it. 
00:42:50.734 [2024-05-15 09:08:45.460722] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.734 [2024-05-15 09:08:45.460834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.734 [2024-05-15 09:08:45.460861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.734 [2024-05-15 09:08:45.460875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.734 [2024-05-15 09:08:45.460888] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.734 [2024-05-15 09:08:45.460918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.734 qpair failed and we were unable to recover it. 00:42:50.734 [2024-05-15 09:08:45.470878] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.734 [2024-05-15 09:08:45.470990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.734 [2024-05-15 09:08:45.471016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.734 [2024-05-15 09:08:45.471037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.734 [2024-05-15 09:08:45.471050] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.734 [2024-05-15 09:08:45.471080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.734 qpair failed and we were unable to recover it. 00:42:50.734 [2024-05-15 09:08:45.480744] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.734 [2024-05-15 09:08:45.480850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.734 [2024-05-15 09:08:45.480877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.734 [2024-05-15 09:08:45.480892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.734 [2024-05-15 09:08:45.480906] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.734 [2024-05-15 09:08:45.480937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.734 qpair failed and we were unable to recover it. 
00:42:50.734 [2024-05-15 09:08:45.490795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.734 [2024-05-15 09:08:45.490904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.734 [2024-05-15 09:08:45.490930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.734 [2024-05-15 09:08:45.490945] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.734 [2024-05-15 09:08:45.490958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.734 [2024-05-15 09:08:45.490987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.734 qpair failed and we were unable to recover it. 00:42:50.734 [2024-05-15 09:08:45.500810] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.734 [2024-05-15 09:08:45.500921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.734 [2024-05-15 09:08:45.500947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.734 [2024-05-15 09:08:45.500963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.734 [2024-05-15 09:08:45.500976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.734 [2024-05-15 09:08:45.501006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.734 qpair failed and we were unable to recover it. 00:42:50.734 [2024-05-15 09:08:45.510821] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.734 [2024-05-15 09:08:45.510924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.734 [2024-05-15 09:08:45.510950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.734 [2024-05-15 09:08:45.510964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.734 [2024-05-15 09:08:45.510977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.734 [2024-05-15 09:08:45.511006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.734 qpair failed and we were unable to recover it. 
00:42:50.734 [2024-05-15 09:08:45.520893] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.734 [2024-05-15 09:08:45.520996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.734 [2024-05-15 09:08:45.521021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.734 [2024-05-15 09:08:45.521036] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.734 [2024-05-15 09:08:45.521049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.734 [2024-05-15 09:08:45.521077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.734 qpair failed and we were unable to recover it. 00:42:50.994 [2024-05-15 09:08:45.530907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.994 [2024-05-15 09:08:45.531028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.994 [2024-05-15 09:08:45.531055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.994 [2024-05-15 09:08:45.531070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.994 [2024-05-15 09:08:45.531085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.994 [2024-05-15 09:08:45.531115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.994 qpair failed and we were unable to recover it. 00:42:50.994 [2024-05-15 09:08:45.540972] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.994 [2024-05-15 09:08:45.541134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.994 [2024-05-15 09:08:45.541161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.994 [2024-05-15 09:08:45.541176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.994 [2024-05-15 09:08:45.541189] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.994 [2024-05-15 09:08:45.541226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.994 qpair failed and we were unable to recover it. 
00:42:50.994 [2024-05-15 09:08:45.550965] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.994 [2024-05-15 09:08:45.551078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.994 [2024-05-15 09:08:45.551104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.994 [2024-05-15 09:08:45.551119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.994 [2024-05-15 09:08:45.551131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.994 [2024-05-15 09:08:45.551160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.994 qpair failed and we were unable to recover it. 00:42:50.994 [2024-05-15 09:08:45.560976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.994 [2024-05-15 09:08:45.561083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.994 [2024-05-15 09:08:45.561113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.994 [2024-05-15 09:08:45.561129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.994 [2024-05-15 09:08:45.561141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.994 [2024-05-15 09:08:45.561170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.994 qpair failed and we were unable to recover it. 00:42:50.994 [2024-05-15 09:08:45.570999] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.994 [2024-05-15 09:08:45.571111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.994 [2024-05-15 09:08:45.571138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.994 [2024-05-15 09:08:45.571153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.994 [2024-05-15 09:08:45.571165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.994 [2024-05-15 09:08:45.571195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.994 qpair failed and we were unable to recover it. 
00:42:50.994 [2024-05-15 09:08:45.581074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.994 [2024-05-15 09:08:45.581211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.994 [2024-05-15 09:08:45.581243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.994 [2024-05-15 09:08:45.581258] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.994 [2024-05-15 09:08:45.581271] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.994 [2024-05-15 09:08:45.581301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.994 qpair failed and we were unable to recover it. 00:42:50.994 [2024-05-15 09:08:45.591067] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.994 [2024-05-15 09:08:45.591172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.994 [2024-05-15 09:08:45.591199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.994 [2024-05-15 09:08:45.591214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.994 [2024-05-15 09:08:45.591236] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.994 [2024-05-15 09:08:45.591266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.994 qpair failed and we were unable to recover it. 00:42:50.994 [2024-05-15 09:08:45.601099] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.994 [2024-05-15 09:08:45.601224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.994 [2024-05-15 09:08:45.601252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.994 [2024-05-15 09:08:45.601267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.994 [2024-05-15 09:08:45.601280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.994 [2024-05-15 09:08:45.601329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.994 qpair failed and we were unable to recover it. 
00:42:50.994 [2024-05-15 09:08:45.611117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.994 [2024-05-15 09:08:45.611276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.994 [2024-05-15 09:08:45.611303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.994 [2024-05-15 09:08:45.611319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.994 [2024-05-15 09:08:45.611331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.994 [2024-05-15 09:08:45.611361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.994 qpair failed and we were unable to recover it. 00:42:50.994 [2024-05-15 09:08:45.621155] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.994 [2024-05-15 09:08:45.621285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.994 [2024-05-15 09:08:45.621311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.994 [2024-05-15 09:08:45.621325] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.994 [2024-05-15 09:08:45.621337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.994 [2024-05-15 09:08:45.621367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.994 qpair failed and we were unable to recover it. 00:42:50.994 [2024-05-15 09:08:45.631192] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.994 [2024-05-15 09:08:45.631320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.994 [2024-05-15 09:08:45.631348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.994 [2024-05-15 09:08:45.631363] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.994 [2024-05-15 09:08:45.631375] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.994 [2024-05-15 09:08:45.631405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.994 qpair failed and we were unable to recover it. 
00:42:50.994 [2024-05-15 09:08:45.641234] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.994 [2024-05-15 09:08:45.641343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.995 [2024-05-15 09:08:45.641370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.995 [2024-05-15 09:08:45.641385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.995 [2024-05-15 09:08:45.641397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.995 [2024-05-15 09:08:45.641427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.995 qpair failed and we were unable to recover it. 00:42:50.995 [2024-05-15 09:08:45.651238] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.995 [2024-05-15 09:08:45.651354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.995 [2024-05-15 09:08:45.651385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.995 [2024-05-15 09:08:45.651401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.995 [2024-05-15 09:08:45.651413] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.995 [2024-05-15 09:08:45.651442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.995 qpair failed and we were unable to recover it. 00:42:50.995 [2024-05-15 09:08:45.661411] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.995 [2024-05-15 09:08:45.661556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.995 [2024-05-15 09:08:45.661584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.995 [2024-05-15 09:08:45.661603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.995 [2024-05-15 09:08:45.661617] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.995 [2024-05-15 09:08:45.661649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.995 qpair failed and we were unable to recover it. 
00:42:50.995 [2024-05-15 09:08:45.671295] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.995 [2024-05-15 09:08:45.671405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.995 [2024-05-15 09:08:45.671431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.995 [2024-05-15 09:08:45.671446] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.995 [2024-05-15 09:08:45.671459] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.995 [2024-05-15 09:08:45.671488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.995 qpair failed and we were unable to recover it. 00:42:50.995 [2024-05-15 09:08:45.681307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.995 [2024-05-15 09:08:45.681415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.995 [2024-05-15 09:08:45.681442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.995 [2024-05-15 09:08:45.681456] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.995 [2024-05-15 09:08:45.681469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.995 [2024-05-15 09:08:45.681498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.995 qpair failed and we were unable to recover it. 00:42:50.995 [2024-05-15 09:08:45.691394] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.995 [2024-05-15 09:08:45.691545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.995 [2024-05-15 09:08:45.691571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.995 [2024-05-15 09:08:45.691586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.995 [2024-05-15 09:08:45.691599] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.995 [2024-05-15 09:08:45.691634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.995 qpair failed and we were unable to recover it. 
00:42:50.995 [2024-05-15 09:08:45.701409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.995 [2024-05-15 09:08:45.701521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.995 [2024-05-15 09:08:45.701546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.995 [2024-05-15 09:08:45.701562] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.995 [2024-05-15 09:08:45.701574] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.995 [2024-05-15 09:08:45.701603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.995 qpair failed and we were unable to recover it. 00:42:50.995 [2024-05-15 09:08:45.711411] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.995 [2024-05-15 09:08:45.711543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.995 [2024-05-15 09:08:45.711568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.995 [2024-05-15 09:08:45.711583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.995 [2024-05-15 09:08:45.711595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.995 [2024-05-15 09:08:45.711625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.995 qpair failed and we were unable to recover it. 00:42:50.995 [2024-05-15 09:08:45.721428] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.995 [2024-05-15 09:08:45.721538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.995 [2024-05-15 09:08:45.721565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.995 [2024-05-15 09:08:45.721580] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.995 [2024-05-15 09:08:45.721592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.995 [2024-05-15 09:08:45.721624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.995 qpair failed and we were unable to recover it. 
00:42:50.995 [2024-05-15 09:08:45.731482] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.995 [2024-05-15 09:08:45.731638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.995 [2024-05-15 09:08:45.731665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.995 [2024-05-15 09:08:45.731680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.995 [2024-05-15 09:08:45.731693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.995 [2024-05-15 09:08:45.731722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.995 qpair failed and we were unable to recover it. 00:42:50.995 [2024-05-15 09:08:45.741605] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.995 [2024-05-15 09:08:45.741735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.995 [2024-05-15 09:08:45.741761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.995 [2024-05-15 09:08:45.741776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.995 [2024-05-15 09:08:45.741789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.995 [2024-05-15 09:08:45.741819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.995 qpair failed and we were unable to recover it. 00:42:50.995 [2024-05-15 09:08:45.751576] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.995 [2024-05-15 09:08:45.751701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.995 [2024-05-15 09:08:45.751728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.995 [2024-05-15 09:08:45.751743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.995 [2024-05-15 09:08:45.751755] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.996 [2024-05-15 09:08:45.751784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.996 qpair failed and we were unable to recover it. 
00:42:50.996 [2024-05-15 09:08:45.761566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.996 [2024-05-15 09:08:45.761672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.996 [2024-05-15 09:08:45.761698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.996 [2024-05-15 09:08:45.761713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.996 [2024-05-15 09:08:45.761726] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.996 [2024-05-15 09:08:45.761755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.996 qpair failed and we were unable to recover it. 00:42:50.996 [2024-05-15 09:08:45.771609] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.996 [2024-05-15 09:08:45.771712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.996 [2024-05-15 09:08:45.771738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.996 [2024-05-15 09:08:45.771752] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.996 [2024-05-15 09:08:45.771765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.996 [2024-05-15 09:08:45.771794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.996 qpair failed and we were unable to recover it. 00:42:50.996 [2024-05-15 09:08:45.781622] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.996 [2024-05-15 09:08:45.781743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.996 [2024-05-15 09:08:45.781773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.996 [2024-05-15 09:08:45.781787] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.996 [2024-05-15 09:08:45.781816] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:50.996 [2024-05-15 09:08:45.781846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.996 qpair failed and we were unable to recover it. 
00:42:51.257 [2024-05-15 09:08:45.791658] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.257 [2024-05-15 09:08:45.791769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.257 [2024-05-15 09:08:45.791795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.257 [2024-05-15 09:08:45.791811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.257 [2024-05-15 09:08:45.791823] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.257 [2024-05-15 09:08:45.791852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.257 qpair failed and we were unable to recover it. 00:42:51.257 [2024-05-15 09:08:45.801649] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.257 [2024-05-15 09:08:45.801759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.257 [2024-05-15 09:08:45.801785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.257 [2024-05-15 09:08:45.801800] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.257 [2024-05-15 09:08:45.801813] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.257 [2024-05-15 09:08:45.801842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.257 qpair failed and we were unable to recover it. 00:42:51.257 [2024-05-15 09:08:45.811691] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.257 [2024-05-15 09:08:45.811806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.257 [2024-05-15 09:08:45.811832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.257 [2024-05-15 09:08:45.811847] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.257 [2024-05-15 09:08:45.811859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.257 [2024-05-15 09:08:45.811889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.257 qpair failed and we were unable to recover it. 
00:42:51.257 [2024-05-15 09:08:45.821771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.257 [2024-05-15 09:08:45.821894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.257 [2024-05-15 09:08:45.821920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.257 [2024-05-15 09:08:45.821936] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.257 [2024-05-15 09:08:45.821951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.257 [2024-05-15 09:08:45.821981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.257 qpair failed and we were unable to recover it. 00:42:51.257 [2024-05-15 09:08:45.831891] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.257 [2024-05-15 09:08:45.832016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.257 [2024-05-15 09:08:45.832044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.257 [2024-05-15 09:08:45.832060] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.257 [2024-05-15 09:08:45.832072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.257 [2024-05-15 09:08:45.832103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.257 qpair failed and we were unable to recover it. 00:42:51.257 [2024-05-15 09:08:45.841795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.257 [2024-05-15 09:08:45.841911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.257 [2024-05-15 09:08:45.841938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.257 [2024-05-15 09:08:45.841953] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.257 [2024-05-15 09:08:45.841965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.257 [2024-05-15 09:08:45.841996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.257 qpair failed and we were unable to recover it. 
00:42:51.257 [2024-05-15 09:08:45.851788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.257 [2024-05-15 09:08:45.851894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.257 [2024-05-15 09:08:45.851920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.257 [2024-05-15 09:08:45.851935] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.257 [2024-05-15 09:08:45.851947] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.257 [2024-05-15 09:08:45.851977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.257 qpair failed and we were unable to recover it. 00:42:51.257 [2024-05-15 09:08:45.861842] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.257 [2024-05-15 09:08:45.861959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.257 [2024-05-15 09:08:45.861988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.257 [2024-05-15 09:08:45.862004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.257 [2024-05-15 09:08:45.862016] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.257 [2024-05-15 09:08:45.862046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.257 qpair failed and we were unable to recover it. 00:42:51.257 [2024-05-15 09:08:45.871844] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.257 [2024-05-15 09:08:45.871950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.257 [2024-05-15 09:08:45.871976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.257 [2024-05-15 09:08:45.872001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.257 [2024-05-15 09:08:45.872014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.257 [2024-05-15 09:08:45.872044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.257 qpair failed and we were unable to recover it. 
00:42:51.257 [2024-05-15 09:08:45.881914] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.257 [2024-05-15 09:08:45.882024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.257 [2024-05-15 09:08:45.882050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.257 [2024-05-15 09:08:45.882065] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.257 [2024-05-15 09:08:45.882077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.257 [2024-05-15 09:08:45.882106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.257 qpair failed and we were unable to recover it. 00:42:51.257 [2024-05-15 09:08:45.891895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.257 [2024-05-15 09:08:45.892004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.257 [2024-05-15 09:08:45.892031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.257 [2024-05-15 09:08:45.892046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.257 [2024-05-15 09:08:45.892059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.257 [2024-05-15 09:08:45.892088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.257 qpair failed and we were unable to recover it. 00:42:51.257 [2024-05-15 09:08:45.901950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.257 [2024-05-15 09:08:45.902083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.257 [2024-05-15 09:08:45.902112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.257 [2024-05-15 09:08:45.902127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.258 [2024-05-15 09:08:45.902144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.258 [2024-05-15 09:08:45.902175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.258 qpair failed and we were unable to recover it. 
00:42:51.258 [2024-05-15 09:08:45.911994] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.258 [2024-05-15 09:08:45.912119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.258 [2024-05-15 09:08:45.912147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.258 [2024-05-15 09:08:45.912162] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.258 [2024-05-15 09:08:45.912175] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.258 [2024-05-15 09:08:45.912209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.258 qpair failed and we were unable to recover it. 00:42:51.258 [2024-05-15 09:08:45.921993] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.258 [2024-05-15 09:08:45.922099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.258 [2024-05-15 09:08:45.922127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.258 [2024-05-15 09:08:45.922141] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.258 [2024-05-15 09:08:45.922154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.258 [2024-05-15 09:08:45.922184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.258 qpair failed and we were unable to recover it. 00:42:51.258 [2024-05-15 09:08:45.932034] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.258 [2024-05-15 09:08:45.932147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.258 [2024-05-15 09:08:45.932174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.258 [2024-05-15 09:08:45.932188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.258 [2024-05-15 09:08:45.932209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.258 [2024-05-15 09:08:45.932247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.258 qpair failed and we were unable to recover it. 
00:42:51.258 [2024-05-15 09:08:45.942049] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.258 [2024-05-15 09:08:45.942167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.258 [2024-05-15 09:08:45.942194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.258 [2024-05-15 09:08:45.942212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.258 [2024-05-15 09:08:45.942233] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.258 [2024-05-15 09:08:45.942263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.258 qpair failed and we were unable to recover it. 00:42:51.258 [2024-05-15 09:08:45.952080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.258 [2024-05-15 09:08:45.952191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.258 [2024-05-15 09:08:45.952236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.258 [2024-05-15 09:08:45.952252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.258 [2024-05-15 09:08:45.952265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.258 [2024-05-15 09:08:45.952294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.258 qpair failed and we were unable to recover it. 00:42:51.258 [2024-05-15 09:08:45.962128] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.258 [2024-05-15 09:08:45.962261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.258 [2024-05-15 09:08:45.962286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.258 [2024-05-15 09:08:45.962306] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.258 [2024-05-15 09:08:45.962319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.258 [2024-05-15 09:08:45.962350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.258 qpair failed and we were unable to recover it. 
00:42:51.258 [2024-05-15 09:08:45.972147] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.258 [2024-05-15 09:08:45.972288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.258 [2024-05-15 09:08:45.972316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.258 [2024-05-15 09:08:45.972331] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.258 [2024-05-15 09:08:45.972344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.258 [2024-05-15 09:08:45.972374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.258 qpair failed and we were unable to recover it. 00:42:51.258 [2024-05-15 09:08:45.982181] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.258 [2024-05-15 09:08:45.982314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.258 [2024-05-15 09:08:45.982341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.258 [2024-05-15 09:08:45.982355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.258 [2024-05-15 09:08:45.982369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.258 [2024-05-15 09:08:45.982401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.258 qpair failed and we were unable to recover it. 00:42:51.258 [2024-05-15 09:08:45.992249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.258 [2024-05-15 09:08:45.992407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.258 [2024-05-15 09:08:45.992434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.258 [2024-05-15 09:08:45.992450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.258 [2024-05-15 09:08:45.992463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.258 [2024-05-15 09:08:45.992493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.258 qpair failed and we were unable to recover it. 
00:42:51.258 [2024-05-15 09:08:46.002313] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.258 [2024-05-15 09:08:46.002452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.258 [2024-05-15 09:08:46.002478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.258 [2024-05-15 09:08:46.002504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.258 [2024-05-15 09:08:46.002517] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.258 [2024-05-15 09:08:46.002546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.258 qpair failed and we were unable to recover it. 00:42:51.258 [2024-05-15 09:08:46.012282] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.258 [2024-05-15 09:08:46.012403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.258 [2024-05-15 09:08:46.012430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.258 [2024-05-15 09:08:46.012445] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.258 [2024-05-15 09:08:46.012466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.258 [2024-05-15 09:08:46.012498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.258 qpair failed and we were unable to recover it. 00:42:51.258 [2024-05-15 09:08:46.022315] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.258 [2024-05-15 09:08:46.022426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.258 [2024-05-15 09:08:46.022454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.258 [2024-05-15 09:08:46.022468] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.258 [2024-05-15 09:08:46.022481] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.258 [2024-05-15 09:08:46.022511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.258 qpair failed and we were unable to recover it. 
00:42:51.258 [2024-05-15 09:08:46.032307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.258 [2024-05-15 09:08:46.032456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.258 [2024-05-15 09:08:46.032483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.258 [2024-05-15 09:08:46.032498] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.258 [2024-05-15 09:08:46.032520] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.259 [2024-05-15 09:08:46.032550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.259 qpair failed and we were unable to recover it. 00:42:51.259 [2024-05-15 09:08:46.042350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.259 [2024-05-15 09:08:46.042458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.259 [2024-05-15 09:08:46.042485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.259 [2024-05-15 09:08:46.042500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.259 [2024-05-15 09:08:46.042513] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.259 [2024-05-15 09:08:46.042567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.259 qpair failed and we were unable to recover it. 00:42:51.520 [2024-05-15 09:08:46.052467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.520 [2024-05-15 09:08:46.052577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.520 [2024-05-15 09:08:46.052608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.520 [2024-05-15 09:08:46.052624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.520 [2024-05-15 09:08:46.052645] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.520 [2024-05-15 09:08:46.052674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.520 qpair failed and we were unable to recover it. 
00:42:51.520 [2024-05-15 09:08:46.062448] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.520 [2024-05-15 09:08:46.062564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.520 [2024-05-15 09:08:46.062597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.520 [2024-05-15 09:08:46.062614] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.520 [2024-05-15 09:08:46.062628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.520 [2024-05-15 09:08:46.062662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.520 qpair failed and we were unable to recover it. 00:42:51.520 [2024-05-15 09:08:46.072427] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.520 [2024-05-15 09:08:46.072543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.520 [2024-05-15 09:08:46.072570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.520 [2024-05-15 09:08:46.072585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.520 [2024-05-15 09:08:46.072597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.520 [2024-05-15 09:08:46.072627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.520 qpair failed and we were unable to recover it. 00:42:51.520 [2024-05-15 09:08:46.082453] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.520 [2024-05-15 09:08:46.082584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.520 [2024-05-15 09:08:46.082611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.520 [2024-05-15 09:08:46.082625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.520 [2024-05-15 09:08:46.082638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.521 [2024-05-15 09:08:46.082667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.521 qpair failed and we were unable to recover it. 
00:42:51.521 [2024-05-15 09:08:46.092521] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.521 [2024-05-15 09:08:46.092636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.521 [2024-05-15 09:08:46.092663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.521 [2024-05-15 09:08:46.092678] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.521 [2024-05-15 09:08:46.092691] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.521 [2024-05-15 09:08:46.092742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.521 qpair failed and we were unable to recover it. 00:42:51.521 [2024-05-15 09:08:46.102495] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.521 [2024-05-15 09:08:46.102604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.521 [2024-05-15 09:08:46.102630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.521 [2024-05-15 09:08:46.102645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.521 [2024-05-15 09:08:46.102657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.521 [2024-05-15 09:08:46.102687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.521 qpair failed and we were unable to recover it. 00:42:51.521 [2024-05-15 09:08:46.112510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.521 [2024-05-15 09:08:46.112622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.521 [2024-05-15 09:08:46.112649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.521 [2024-05-15 09:08:46.112664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.521 [2024-05-15 09:08:46.112676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.521 [2024-05-15 09:08:46.112706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.521 qpair failed and we were unable to recover it. 
00:42:51.521 [2024-05-15 09:08:46.122593] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.521 [2024-05-15 09:08:46.122715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.521 [2024-05-15 09:08:46.122744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.521 [2024-05-15 09:08:46.122759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.521 [2024-05-15 09:08:46.122772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.521 [2024-05-15 09:08:46.122803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.521 qpair failed and we were unable to recover it. 00:42:51.521 [2024-05-15 09:08:46.132574] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.521 [2024-05-15 09:08:46.132685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.521 [2024-05-15 09:08:46.132712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.521 [2024-05-15 09:08:46.132727] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.521 [2024-05-15 09:08:46.132740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.521 [2024-05-15 09:08:46.132770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.521 qpair failed and we were unable to recover it. 00:42:51.521 [2024-05-15 09:08:46.142611] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.521 [2024-05-15 09:08:46.142725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.521 [2024-05-15 09:08:46.142756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.521 [2024-05-15 09:08:46.142772] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.521 [2024-05-15 09:08:46.142785] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.521 [2024-05-15 09:08:46.142815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.521 qpair failed and we were unable to recover it. 
00:42:51.521 [2024-05-15 09:08:46.152634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.521 [2024-05-15 09:08:46.152786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.521 [2024-05-15 09:08:46.152813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.521 [2024-05-15 09:08:46.152829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.521 [2024-05-15 09:08:46.152841] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.521 [2024-05-15 09:08:46.152870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.521 qpair failed and we were unable to recover it. 00:42:51.521 [2024-05-15 09:08:46.162664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.521 [2024-05-15 09:08:46.162775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.521 [2024-05-15 09:08:46.162801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.521 [2024-05-15 09:08:46.162816] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.521 [2024-05-15 09:08:46.162829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.521 [2024-05-15 09:08:46.162873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.521 qpair failed and we were unable to recover it. 00:42:51.521 [2024-05-15 09:08:46.172716] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.521 [2024-05-15 09:08:46.172825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.521 [2024-05-15 09:08:46.172852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.521 [2024-05-15 09:08:46.172867] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.521 [2024-05-15 09:08:46.172879] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.521 [2024-05-15 09:08:46.172909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.521 qpair failed and we were unable to recover it. 
00:42:51.521 [2024-05-15 09:08:46.182730] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.521 [2024-05-15 09:08:46.182858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.521 [2024-05-15 09:08:46.182884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.521 [2024-05-15 09:08:46.182899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.521 [2024-05-15 09:08:46.182918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.521 [2024-05-15 09:08:46.182948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.521 qpair failed and we were unable to recover it. 00:42:51.521 [2024-05-15 09:08:46.192789] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.521 [2024-05-15 09:08:46.192899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.521 [2024-05-15 09:08:46.192926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.521 [2024-05-15 09:08:46.192941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.521 [2024-05-15 09:08:46.192953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.521 [2024-05-15 09:08:46.192982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.521 qpair failed and we were unable to recover it. 00:42:51.521 [2024-05-15 09:08:46.202794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.521 [2024-05-15 09:08:46.202902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.521 [2024-05-15 09:08:46.202929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.521 [2024-05-15 09:08:46.202944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.521 [2024-05-15 09:08:46.202956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.522 [2024-05-15 09:08:46.202986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.522 qpair failed and we were unable to recover it. 
00:42:51.522 [2024-05-15 09:08:46.212812] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.522 [2024-05-15 09:08:46.212937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.522 [2024-05-15 09:08:46.212963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.522 [2024-05-15 09:08:46.212979] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.522 [2024-05-15 09:08:46.212991] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.522 [2024-05-15 09:08:46.213021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.522 qpair failed and we were unable to recover it. 00:42:51.522 [2024-05-15 09:08:46.222855] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.522 [2024-05-15 09:08:46.222977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.522 [2024-05-15 09:08:46.223003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.522 [2024-05-15 09:08:46.223018] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.522 [2024-05-15 09:08:46.223031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.522 [2024-05-15 09:08:46.223060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.522 qpair failed and we were unable to recover it. 00:42:51.522 [2024-05-15 09:08:46.232867] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.522 [2024-05-15 09:08:46.232985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.522 [2024-05-15 09:08:46.233012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.522 [2024-05-15 09:08:46.233027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.522 [2024-05-15 09:08:46.233039] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.522 [2024-05-15 09:08:46.233069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.522 qpair failed and we were unable to recover it. 
00:42:51.522 [2024-05-15 09:08:46.242879] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.522 [2024-05-15 09:08:46.243023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.522 [2024-05-15 09:08:46.243049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.522 [2024-05-15 09:08:46.243065] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.522 [2024-05-15 09:08:46.243077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.522 [2024-05-15 09:08:46.243106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.522 qpair failed and we were unable to recover it. 00:42:51.522 [2024-05-15 09:08:46.252920] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.522 [2024-05-15 09:08:46.253046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.522 [2024-05-15 09:08:46.253074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.522 [2024-05-15 09:08:46.253089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.522 [2024-05-15 09:08:46.253104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.522 [2024-05-15 09:08:46.253134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.522 qpair failed and we were unable to recover it. 00:42:51.522 [2024-05-15 09:08:46.262955] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.522 [2024-05-15 09:08:46.263067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.522 [2024-05-15 09:08:46.263094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.522 [2024-05-15 09:08:46.263108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.522 [2024-05-15 09:08:46.263121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.522 [2024-05-15 09:08:46.263151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.522 qpair failed and we were unable to recover it. 
00:42:51.522 [2024-05-15 09:08:46.272980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.522 [2024-05-15 09:08:46.273098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.522 [2024-05-15 09:08:46.273123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.522 [2024-05-15 09:08:46.273143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.522 [2024-05-15 09:08:46.273156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.522 [2024-05-15 09:08:46.273186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.522 qpair failed and we were unable to recover it. 00:42:51.522 [2024-05-15 09:08:46.283005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.522 [2024-05-15 09:08:46.283104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.522 [2024-05-15 09:08:46.283130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.522 [2024-05-15 09:08:46.283144] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.522 [2024-05-15 09:08:46.283157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.522 [2024-05-15 09:08:46.283200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.522 qpair failed and we were unable to recover it. 00:42:51.522 [2024-05-15 09:08:46.293050] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.522 [2024-05-15 09:08:46.293166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.522 [2024-05-15 09:08:46.293193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.522 [2024-05-15 09:08:46.293208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.522 [2024-05-15 09:08:46.293229] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.522 [2024-05-15 09:08:46.293260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.522 qpair failed and we were unable to recover it. 
00:42:51.522 [2024-05-15 09:08:46.303087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.522 [2024-05-15 09:08:46.303201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.522 [2024-05-15 09:08:46.303235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.522 [2024-05-15 09:08:46.303252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.522 [2024-05-15 09:08:46.303265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.522 [2024-05-15 09:08:46.303296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.522 qpair failed and we were unable to recover it. 00:42:51.784 [2024-05-15 09:08:46.313166] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.784 [2024-05-15 09:08:46.313284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.784 [2024-05-15 09:08:46.313310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.784 [2024-05-15 09:08:46.313325] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.784 [2024-05-15 09:08:46.313338] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.784 [2024-05-15 09:08:46.313368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.784 qpair failed and we were unable to recover it. 00:42:51.784 [2024-05-15 09:08:46.323134] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.784 [2024-05-15 09:08:46.323246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.784 [2024-05-15 09:08:46.323272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.784 [2024-05-15 09:08:46.323286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.784 [2024-05-15 09:08:46.323299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.784 [2024-05-15 09:08:46.323328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.784 qpair failed and we were unable to recover it. 
00:42:51.784 [2024-05-15 09:08:46.333125] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.784 [2024-05-15 09:08:46.333234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.784 [2024-05-15 09:08:46.333260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.784 [2024-05-15 09:08:46.333275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.784 [2024-05-15 09:08:46.333287] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.784 [2024-05-15 09:08:46.333317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.784 qpair failed and we were unable to recover it. 00:42:51.784 [2024-05-15 09:08:46.343184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.784 [2024-05-15 09:08:46.343344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.784 [2024-05-15 09:08:46.343370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.784 [2024-05-15 09:08:46.343385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.784 [2024-05-15 09:08:46.343397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.784 [2024-05-15 09:08:46.343427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.784 qpair failed and we were unable to recover it. 00:42:51.784 [2024-05-15 09:08:46.353209] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.784 [2024-05-15 09:08:46.353329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.784 [2024-05-15 09:08:46.353354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.784 [2024-05-15 09:08:46.353369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.784 [2024-05-15 09:08:46.353382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.784 [2024-05-15 09:08:46.353411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.784 qpair failed and we were unable to recover it. 
00:42:51.784 [2024-05-15 09:08:46.363232] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.784 [2024-05-15 09:08:46.363341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.784 [2024-05-15 09:08:46.363366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.784 [2024-05-15 09:08:46.363386] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.784 [2024-05-15 09:08:46.363400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.784 [2024-05-15 09:08:46.363430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.785 qpair failed and we were unable to recover it. 00:42:51.785 [2024-05-15 09:08:46.373260] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.785 [2024-05-15 09:08:46.373361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.785 [2024-05-15 09:08:46.373386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.785 [2024-05-15 09:08:46.373401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.785 [2024-05-15 09:08:46.373413] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.785 [2024-05-15 09:08:46.373444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.785 qpair failed and we were unable to recover it. 00:42:51.785 [2024-05-15 09:08:46.383314] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.785 [2024-05-15 09:08:46.383430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.785 [2024-05-15 09:08:46.383456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.785 [2024-05-15 09:08:46.383470] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.785 [2024-05-15 09:08:46.383484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.785 [2024-05-15 09:08:46.383513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.785 qpair failed and we were unable to recover it. 
00:42:51.785 [2024-05-15 09:08:46.393297] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.785 [2024-05-15 09:08:46.393399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.785 [2024-05-15 09:08:46.393423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.785 [2024-05-15 09:08:46.393438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.785 [2024-05-15 09:08:46.393451] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.785 [2024-05-15 09:08:46.393480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.785 qpair failed and we were unable to recover it. 00:42:51.785 [2024-05-15 09:08:46.403360] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.785 [2024-05-15 09:08:46.403478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.785 [2024-05-15 09:08:46.403503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.785 [2024-05-15 09:08:46.403517] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.785 [2024-05-15 09:08:46.403531] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.785 [2024-05-15 09:08:46.403560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.785 qpair failed and we were unable to recover it. 00:42:51.785 [2024-05-15 09:08:46.413356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.785 [2024-05-15 09:08:46.413454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.785 [2024-05-15 09:08:46.413480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.785 [2024-05-15 09:08:46.413494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.785 [2024-05-15 09:08:46.413507] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.785 [2024-05-15 09:08:46.413548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.785 qpair failed and we were unable to recover it. 
00:42:51.785 [2024-05-15 09:08:46.423392] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.785 [2024-05-15 09:08:46.423522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.785 [2024-05-15 09:08:46.423547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.785 [2024-05-15 09:08:46.423563] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.785 [2024-05-15 09:08:46.423575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.785 [2024-05-15 09:08:46.423605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.785 qpair failed and we were unable to recover it. 00:42:51.785 [2024-05-15 09:08:46.433443] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.785 [2024-05-15 09:08:46.433546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.785 [2024-05-15 09:08:46.433570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.785 [2024-05-15 09:08:46.433585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.785 [2024-05-15 09:08:46.433597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.785 [2024-05-15 09:08:46.433627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.785 qpair failed and we were unable to recover it. 00:42:51.785 [2024-05-15 09:08:46.443485] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.785 [2024-05-15 09:08:46.443621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.785 [2024-05-15 09:08:46.443646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.785 [2024-05-15 09:08:46.443661] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.785 [2024-05-15 09:08:46.443673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.785 [2024-05-15 09:08:46.443703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.785 qpair failed and we were unable to recover it. 
00:42:51.785 [2024-05-15 09:08:46.453477] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.785 [2024-05-15 09:08:46.453609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.785 [2024-05-15 09:08:46.453638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.785 [2024-05-15 09:08:46.453654] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.785 [2024-05-15 09:08:46.453667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.785 [2024-05-15 09:08:46.453697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.785 qpair failed and we were unable to recover it. 00:42:51.785 [2024-05-15 09:08:46.463497] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.785 [2024-05-15 09:08:46.463625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.785 [2024-05-15 09:08:46.463649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.785 [2024-05-15 09:08:46.463664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.785 [2024-05-15 09:08:46.463677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.785 [2024-05-15 09:08:46.463706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.785 qpair failed and we were unable to recover it. 00:42:51.785 [2024-05-15 09:08:46.473535] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.785 [2024-05-15 09:08:46.473645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.785 [2024-05-15 09:08:46.473670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.785 [2024-05-15 09:08:46.473684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.785 [2024-05-15 09:08:46.473697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.785 [2024-05-15 09:08:46.473726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.785 qpair failed and we were unable to recover it. 
00:42:51.785 [2024-05-15 09:08:46.483557] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.785 [2024-05-15 09:08:46.483665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.785 [2024-05-15 09:08:46.483691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.785 [2024-05-15 09:08:46.483705] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.785 [2024-05-15 09:08:46.483718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.785 [2024-05-15 09:08:46.483748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.785 qpair failed and we were unable to recover it. 00:42:51.785 [2024-05-15 09:08:46.493603] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.785 [2024-05-15 09:08:46.493754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.785 [2024-05-15 09:08:46.493779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.785 [2024-05-15 09:08:46.493793] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.785 [2024-05-15 09:08:46.493806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.785 [2024-05-15 09:08:46.493842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.785 qpair failed and we were unable to recover it. 00:42:51.785 [2024-05-15 09:08:46.503690] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.786 [2024-05-15 09:08:46.503796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.786 [2024-05-15 09:08:46.503822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.786 [2024-05-15 09:08:46.503837] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.786 [2024-05-15 09:08:46.503850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.786 [2024-05-15 09:08:46.503891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.786 qpair failed and we were unable to recover it. 
00:42:51.786 [2024-05-15 09:08:46.513642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.786 [2024-05-15 09:08:46.513772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.786 [2024-05-15 09:08:46.513797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.786 [2024-05-15 09:08:46.513812] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.786 [2024-05-15 09:08:46.513824] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.786 [2024-05-15 09:08:46.513854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.786 qpair failed and we were unable to recover it. 00:42:51.786 [2024-05-15 09:08:46.523667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.786 [2024-05-15 09:08:46.523771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.786 [2024-05-15 09:08:46.523796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.786 [2024-05-15 09:08:46.523811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.786 [2024-05-15 09:08:46.523824] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.786 [2024-05-15 09:08:46.523854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.786 qpair failed and we were unable to recover it. 00:42:51.786 [2024-05-15 09:08:46.533733] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.786 [2024-05-15 09:08:46.533873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.786 [2024-05-15 09:08:46.533899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.786 [2024-05-15 09:08:46.533914] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.786 [2024-05-15 09:08:46.533926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.786 [2024-05-15 09:08:46.533956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.786 qpair failed and we were unable to recover it. 
00:42:51.786 [2024-05-15 09:08:46.543752] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.786 [2024-05-15 09:08:46.543864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.786 [2024-05-15 09:08:46.543895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.786 [2024-05-15 09:08:46.543910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.786 [2024-05-15 09:08:46.543924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.786 [2024-05-15 09:08:46.543953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.786 qpair failed and we were unable to recover it. 00:42:51.786 [2024-05-15 09:08:46.553783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.786 [2024-05-15 09:08:46.553907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.786 [2024-05-15 09:08:46.553933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.786 [2024-05-15 09:08:46.553948] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.786 [2024-05-15 09:08:46.553961] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.786 [2024-05-15 09:08:46.553993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.786 qpair failed and we were unable to recover it. 00:42:51.786 [2024-05-15 09:08:46.563834] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.786 [2024-05-15 09:08:46.563934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.786 [2024-05-15 09:08:46.563960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.786 [2024-05-15 09:08:46.563975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.786 [2024-05-15 09:08:46.563987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.786 [2024-05-15 09:08:46.564017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.786 qpair failed and we were unable to recover it. 
00:42:51.786 [2024-05-15 09:08:46.573913] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.786 [2024-05-15 09:08:46.574027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.786 [2024-05-15 09:08:46.574052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.786 [2024-05-15 09:08:46.574067] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.786 [2024-05-15 09:08:46.574080] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:51.786 [2024-05-15 09:08:46.574109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.786 qpair failed and we were unable to recover it. 00:42:52.047 [2024-05-15 09:08:46.583938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.047 [2024-05-15 09:08:46.584076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.047 [2024-05-15 09:08:46.584102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.047 [2024-05-15 09:08:46.584117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.047 [2024-05-15 09:08:46.584139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.047 [2024-05-15 09:08:46.584170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.047 qpair failed and we were unable to recover it. 00:42:52.047 [2024-05-15 09:08:46.593927] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.047 [2024-05-15 09:08:46.594056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.047 [2024-05-15 09:08:46.594084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.047 [2024-05-15 09:08:46.594099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.047 [2024-05-15 09:08:46.594111] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.047 [2024-05-15 09:08:46.594141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.047 qpair failed and we were unable to recover it. 
00:42:52.047 [2024-05-15 09:08:46.603934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.047 [2024-05-15 09:08:46.604051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.047 [2024-05-15 09:08:46.604076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.047 [2024-05-15 09:08:46.604091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.047 [2024-05-15 09:08:46.604104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.047 [2024-05-15 09:08:46.604133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.047 qpair failed and we were unable to recover it. 00:42:52.047 [2024-05-15 09:08:46.613944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.047 [2024-05-15 09:08:46.614048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.047 [2024-05-15 09:08:46.614073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.047 [2024-05-15 09:08:46.614088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.047 [2024-05-15 09:08:46.614100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.047 [2024-05-15 09:08:46.614131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.047 qpair failed and we were unable to recover it. 00:42:52.047 [2024-05-15 09:08:46.624001] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.047 [2024-05-15 09:08:46.624117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.047 [2024-05-15 09:08:46.624142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.047 [2024-05-15 09:08:46.624168] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.047 [2024-05-15 09:08:46.624182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.047 [2024-05-15 09:08:46.624211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.047 qpair failed and we were unable to recover it. 
00:42:52.047 [2024-05-15 09:08:46.634033] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.047 [2024-05-15 09:08:46.634155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.047 [2024-05-15 09:08:46.634183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.047 [2024-05-15 09:08:46.634198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.047 [2024-05-15 09:08:46.634211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.047 [2024-05-15 09:08:46.634250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.047 qpair failed and we were unable to recover it. 00:42:52.047 [2024-05-15 09:08:46.644039] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.047 [2024-05-15 09:08:46.644143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.047 [2024-05-15 09:08:46.644169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.047 [2024-05-15 09:08:46.644184] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.047 [2024-05-15 09:08:46.644197] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.047 [2024-05-15 09:08:46.644250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.047 qpair failed and we were unable to recover it. 00:42:52.047 [2024-05-15 09:08:46.654071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.047 [2024-05-15 09:08:46.654179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.047 [2024-05-15 09:08:46.654205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.047 [2024-05-15 09:08:46.654227] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.047 [2024-05-15 09:08:46.654241] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.047 [2024-05-15 09:08:46.654271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.047 qpair failed and we were unable to recover it. 
00:42:52.047 [2024-05-15 09:08:46.664109] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.047 [2024-05-15 09:08:46.664224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.047 [2024-05-15 09:08:46.664250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.047 [2024-05-15 09:08:46.664264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.047 [2024-05-15 09:08:46.664277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.047 [2024-05-15 09:08:46.664307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.047 qpair failed and we were unable to recover it. 00:42:52.047 [2024-05-15 09:08:46.674126] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.047 [2024-05-15 09:08:46.674261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.047 [2024-05-15 09:08:46.674289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.047 [2024-05-15 09:08:46.674307] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.047 [2024-05-15 09:08:46.674326] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.047 [2024-05-15 09:08:46.674356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.047 qpair failed and we were unable to recover it. 00:42:52.047 [2024-05-15 09:08:46.684170] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.047 [2024-05-15 09:08:46.684296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.047 [2024-05-15 09:08:46.684323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.047 [2024-05-15 09:08:46.684338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.047 [2024-05-15 09:08:46.684351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.047 [2024-05-15 09:08:46.684381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.047 qpair failed and we were unable to recover it. 
00:42:52.047 [2024-05-15 09:08:46.694195] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.047 [2024-05-15 09:08:46.694314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.047 [2024-05-15 09:08:46.694342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.047 [2024-05-15 09:08:46.694357] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.047 [2024-05-15 09:08:46.694370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.047 [2024-05-15 09:08:46.694401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.047 qpair failed and we were unable to recover it. 00:42:52.047 [2024-05-15 09:08:46.704226] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.047 [2024-05-15 09:08:46.704334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.047 [2024-05-15 09:08:46.704359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.047 [2024-05-15 09:08:46.704373] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.047 [2024-05-15 09:08:46.704386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.047 [2024-05-15 09:08:46.704416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.047 qpair failed and we were unable to recover it. 00:42:52.047 [2024-05-15 09:08:46.714253] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.047 [2024-05-15 09:08:46.714358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.048 [2024-05-15 09:08:46.714383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.048 [2024-05-15 09:08:46.714398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.048 [2024-05-15 09:08:46.714411] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.048 [2024-05-15 09:08:46.714454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.048 qpair failed and we were unable to recover it. 
00:42:52.048 [2024-05-15 09:08:46.724281] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.048 [2024-05-15 09:08:46.724388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.048 [2024-05-15 09:08:46.724413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.048 [2024-05-15 09:08:46.724427] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.048 [2024-05-15 09:08:46.724441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.048 [2024-05-15 09:08:46.724470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.048 qpair failed and we were unable to recover it. 00:42:52.048 [2024-05-15 09:08:46.734305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.048 [2024-05-15 09:08:46.734409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.048 [2024-05-15 09:08:46.734434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.048 [2024-05-15 09:08:46.734448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.048 [2024-05-15 09:08:46.734461] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.048 [2024-05-15 09:08:46.734490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.048 qpair failed and we were unable to recover it. 00:42:52.048 [2024-05-15 09:08:46.744375] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.048 [2024-05-15 09:08:46.744484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.048 [2024-05-15 09:08:46.744510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.048 [2024-05-15 09:08:46.744525] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.048 [2024-05-15 09:08:46.744538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.048 [2024-05-15 09:08:46.744569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.048 qpair failed and we were unable to recover it. 
00:42:52.048 [2024-05-15 09:08:46.754397] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.048 [2024-05-15 09:08:46.754540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.048 [2024-05-15 09:08:46.754565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.048 [2024-05-15 09:08:46.754579] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.048 [2024-05-15 09:08:46.754592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.048 [2024-05-15 09:08:46.754622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.048 qpair failed and we were unable to recover it. 00:42:52.048 [2024-05-15 09:08:46.764397] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.048 [2024-05-15 09:08:46.764513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.048 [2024-05-15 09:08:46.764538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.048 [2024-05-15 09:08:46.764558] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.048 [2024-05-15 09:08:46.764572] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.048 [2024-05-15 09:08:46.764602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.048 qpair failed and we were unable to recover it. 00:42:52.048 [2024-05-15 09:08:46.774465] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.048 [2024-05-15 09:08:46.774567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.048 [2024-05-15 09:08:46.774593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.048 [2024-05-15 09:08:46.774608] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.048 [2024-05-15 09:08:46.774620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.048 [2024-05-15 09:08:46.774649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.048 qpair failed and we were unable to recover it. 
00:42:52.048 [2024-05-15 09:08:46.784508] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.048 [2024-05-15 09:08:46.784626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.048 [2024-05-15 09:08:46.784651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.048 [2024-05-15 09:08:46.784666] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.048 [2024-05-15 09:08:46.784680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.048 [2024-05-15 09:08:46.784709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.048 qpair failed and we were unable to recover it. 00:42:52.048 [2024-05-15 09:08:46.794538] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.048 [2024-05-15 09:08:46.794685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.048 [2024-05-15 09:08:46.794710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.048 [2024-05-15 09:08:46.794725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.048 [2024-05-15 09:08:46.794737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.048 [2024-05-15 09:08:46.794767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.048 qpair failed and we were unable to recover it. 00:42:52.048 [2024-05-15 09:08:46.804489] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.048 [2024-05-15 09:08:46.804593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.048 [2024-05-15 09:08:46.804619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.048 [2024-05-15 09:08:46.804633] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.048 [2024-05-15 09:08:46.804646] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.048 [2024-05-15 09:08:46.804676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.048 qpair failed and we were unable to recover it. 
00:42:52.048 [2024-05-15 09:08:46.814545] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.048 [2024-05-15 09:08:46.814680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.048 [2024-05-15 09:08:46.814705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.048 [2024-05-15 09:08:46.814720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.048 [2024-05-15 09:08:46.814733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.048 [2024-05-15 09:08:46.814763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.048 qpair failed and we were unable to recover it. 00:42:52.048 [2024-05-15 09:08:46.824558] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.048 [2024-05-15 09:08:46.824667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.048 [2024-05-15 09:08:46.824691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.048 [2024-05-15 09:08:46.824706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.048 [2024-05-15 09:08:46.824719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.048 [2024-05-15 09:08:46.824749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.048 qpair failed and we were unable to recover it. 00:42:52.048 [2024-05-15 09:08:46.834576] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.048 [2024-05-15 09:08:46.834676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.048 [2024-05-15 09:08:46.834701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.048 [2024-05-15 09:08:46.834715] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.048 [2024-05-15 09:08:46.834727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.048 [2024-05-15 09:08:46.834757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.048 qpair failed and we were unable to recover it. 
00:42:52.307 [2024-05-15 09:08:46.844703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.307 [2024-05-15 09:08:46.844821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.307 [2024-05-15 09:08:46.844848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.307 [2024-05-15 09:08:46.844865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.307 [2024-05-15 09:08:46.844880] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.307 [2024-05-15 09:08:46.844912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.307 qpair failed and we were unable to recover it. 00:42:52.307 [2024-05-15 09:08:46.854662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.307 [2024-05-15 09:08:46.854778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.307 [2024-05-15 09:08:46.854809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.307 [2024-05-15 09:08:46.854824] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.307 [2024-05-15 09:08:46.854838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.307 [2024-05-15 09:08:46.854868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.307 qpair failed and we were unable to recover it. 00:42:52.307 [2024-05-15 09:08:46.864710] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.307 [2024-05-15 09:08:46.864826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.307 [2024-05-15 09:08:46.864853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.307 [2024-05-15 09:08:46.864868] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.307 [2024-05-15 09:08:46.864885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.307 [2024-05-15 09:08:46.864916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.307 qpair failed and we were unable to recover it. 
00:42:52.307 [2024-05-15 09:08:46.874739] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.307 [2024-05-15 09:08:46.874862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.307 [2024-05-15 09:08:46.874888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.307 [2024-05-15 09:08:46.874902] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.307 [2024-05-15 09:08:46.874915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.307 [2024-05-15 09:08:46.874944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.307 qpair failed and we were unable to recover it. 00:42:52.307 [2024-05-15 09:08:46.884777] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.307 [2024-05-15 09:08:46.884891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.307 [2024-05-15 09:08:46.884917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.307 [2024-05-15 09:08:46.884932] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.307 [2024-05-15 09:08:46.884945] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.307 [2024-05-15 09:08:46.884976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.307 qpair failed and we were unable to recover it. 00:42:52.307 [2024-05-15 09:08:46.894738] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.307 [2024-05-15 09:08:46.894873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.307 [2024-05-15 09:08:46.894898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.307 [2024-05-15 09:08:46.894913] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.307 [2024-05-15 09:08:46.894926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.307 [2024-05-15 09:08:46.894961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.307 qpair failed and we were unable to recover it. 
00:42:52.307 [2024-05-15 09:08:46.904927] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.307 [2024-05-15 09:08:46.905053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.307 [2024-05-15 09:08:46.905079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.307 [2024-05-15 09:08:46.905094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.307 [2024-05-15 09:08:46.905106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.307 [2024-05-15 09:08:46.905135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.307 qpair failed and we were unable to recover it. 00:42:52.307 [2024-05-15 09:08:46.914843] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.308 [2024-05-15 09:08:46.914950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.308 [2024-05-15 09:08:46.914975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.308 [2024-05-15 09:08:46.914989] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.308 [2024-05-15 09:08:46.915001] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.308 [2024-05-15 09:08:46.915031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.308 qpair failed and we were unable to recover it. 00:42:52.308 [2024-05-15 09:08:46.924857] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.308 [2024-05-15 09:08:46.924963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.308 [2024-05-15 09:08:46.924989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.308 [2024-05-15 09:08:46.925003] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.308 [2024-05-15 09:08:46.925016] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.308 [2024-05-15 09:08:46.925046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.308 qpair failed and we were unable to recover it. 
00:42:52.308 [2024-05-15 09:08:46.934860] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.308 [2024-05-15 09:08:46.934961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.308 [2024-05-15 09:08:46.934987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.308 [2024-05-15 09:08:46.935001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.308 [2024-05-15 09:08:46.935014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.308 [2024-05-15 09:08:46.935044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.308 qpair failed and we were unable to recover it. 00:42:52.308 [2024-05-15 09:08:46.944966] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.308 [2024-05-15 09:08:46.945076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.308 [2024-05-15 09:08:46.945107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.308 [2024-05-15 09:08:46.945123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.308 [2024-05-15 09:08:46.945136] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.308 [2024-05-15 09:08:46.945166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.308 qpair failed and we were unable to recover it. 00:42:52.308 [2024-05-15 09:08:46.954972] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.308 [2024-05-15 09:08:46.955079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.308 [2024-05-15 09:08:46.955105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.308 [2024-05-15 09:08:46.955119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.308 [2024-05-15 09:08:46.955133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.308 [2024-05-15 09:08:46.955163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.308 qpair failed and we were unable to recover it. 
00:42:52.308 [2024-05-15 09:08:46.964956] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.308 [2024-05-15 09:08:46.965059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.308 [2024-05-15 09:08:46.965085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.308 [2024-05-15 09:08:46.965100] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.308 [2024-05-15 09:08:46.965113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.308 [2024-05-15 09:08:46.965145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.308 qpair failed and we were unable to recover it. 00:42:52.308 [2024-05-15 09:08:46.974989] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.308 [2024-05-15 09:08:46.975134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.308 [2024-05-15 09:08:46.975159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.308 [2024-05-15 09:08:46.975173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.308 [2024-05-15 09:08:46.975187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.308 [2024-05-15 09:08:46.975222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.308 qpair failed and we were unable to recover it. 00:42:52.308 [2024-05-15 09:08:46.985029] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.308 [2024-05-15 09:08:46.985132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.308 [2024-05-15 09:08:46.985157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.308 [2024-05-15 09:08:46.985171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.308 [2024-05-15 09:08:46.985189] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.308 [2024-05-15 09:08:46.985226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.308 qpair failed and we were unable to recover it. 
00:42:52.308 [2024-05-15 09:08:46.995048] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.308 [2024-05-15 09:08:46.995178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.308 [2024-05-15 09:08:46.995203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.308 [2024-05-15 09:08:46.995224] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.308 [2024-05-15 09:08:46.995239] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.308 [2024-05-15 09:08:46.995269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.308 qpair failed and we were unable to recover it. 00:42:52.308 [2024-05-15 09:08:47.005112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.308 [2024-05-15 09:08:47.005272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.308 [2024-05-15 09:08:47.005298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.308 [2024-05-15 09:08:47.005312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.308 [2024-05-15 09:08:47.005325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.308 [2024-05-15 09:08:47.005354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.308 qpair failed and we were unable to recover it. 00:42:52.308 [2024-05-15 09:08:47.015104] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.308 [2024-05-15 09:08:47.015225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.308 [2024-05-15 09:08:47.015251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.308 [2024-05-15 09:08:47.015268] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.308 [2024-05-15 09:08:47.015281] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.308 [2024-05-15 09:08:47.015311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.308 qpair failed and we were unable to recover it. 
00:42:52.308 [2024-05-15 09:08:47.025127] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.308 [2024-05-15 09:08:47.025261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.308 [2024-05-15 09:08:47.025287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.308 [2024-05-15 09:08:47.025301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.308 [2024-05-15 09:08:47.025314] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.308 [2024-05-15 09:08:47.025344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.308 qpair failed and we were unable to recover it. 00:42:52.308 [2024-05-15 09:08:47.035165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.308 [2024-05-15 09:08:47.035288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.308 [2024-05-15 09:08:47.035314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.308 [2024-05-15 09:08:47.035329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.308 [2024-05-15 09:08:47.035342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.308 [2024-05-15 09:08:47.035371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.308 qpair failed and we were unable to recover it. 00:42:52.308 [2024-05-15 09:08:47.045234] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.308 [2024-05-15 09:08:47.045342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.309 [2024-05-15 09:08:47.045369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.309 [2024-05-15 09:08:47.045383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.309 [2024-05-15 09:08:47.045396] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:52.309 [2024-05-15 09:08:47.045426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.309 qpair failed and we were unable to recover it. 
[2024-05-15 09:08:47.055 - 09:08:47.707] the identical seven-message CONNECT failure sequence above repeats for 66 further I/O qpair connect attempts, one roughly every 10 ms (console time 00:42:52.309 through 00:42:53.091); only the timestamps differ, and every attempt ends with: qpair failed and we were unable to recover it.
00:42:53.091 [2024-05-15 09:08:47.717095] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.091 [2024-05-15 09:08:47.717205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.091 [2024-05-15 09:08:47.717240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.091 [2024-05-15 09:08:47.717255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.091 [2024-05-15 09:08:47.717268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.091 [2024-05-15 09:08:47.717299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.091 qpair failed and we were unable to recover it. 00:42:53.091 [2024-05-15 09:08:47.727136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.091 [2024-05-15 09:08:47.727250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.091 [2024-05-15 09:08:47.727278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.091 [2024-05-15 09:08:47.727293] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.091 [2024-05-15 09:08:47.727308] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.091 [2024-05-15 09:08:47.727351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.091 qpair failed and we were unable to recover it. 00:42:53.091 [2024-05-15 09:08:47.737184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.091 [2024-05-15 09:08:47.737307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.091 [2024-05-15 09:08:47.737334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.091 [2024-05-15 09:08:47.737349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.091 [2024-05-15 09:08:47.737362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.091 [2024-05-15 09:08:47.737391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.091 qpair failed and we were unable to recover it. 
00:42:53.091 [2024-05-15 09:08:47.747201] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.091 [2024-05-15 09:08:47.747320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.091 [2024-05-15 09:08:47.747353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.091 [2024-05-15 09:08:47.747369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.091 [2024-05-15 09:08:47.747382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.091 [2024-05-15 09:08:47.747412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.091 qpair failed and we were unable to recover it. 00:42:53.091 [2024-05-15 09:08:47.757211] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.091 [2024-05-15 09:08:47.757339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.091 [2024-05-15 09:08:47.757366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.091 [2024-05-15 09:08:47.757381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.091 [2024-05-15 09:08:47.757394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.091 [2024-05-15 09:08:47.757424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.091 qpair failed and we were unable to recover it. 00:42:53.091 [2024-05-15 09:08:47.767244] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.092 [2024-05-15 09:08:47.767348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.092 [2024-05-15 09:08:47.767375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.092 [2024-05-15 09:08:47.767390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.092 [2024-05-15 09:08:47.767403] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.092 [2024-05-15 09:08:47.767432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.092 qpair failed and we were unable to recover it. 
00:42:53.092 [2024-05-15 09:08:47.777292] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.092 [2024-05-15 09:08:47.777402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.092 [2024-05-15 09:08:47.777428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.092 [2024-05-15 09:08:47.777443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.092 [2024-05-15 09:08:47.777455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.092 [2024-05-15 09:08:47.777485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.092 qpair failed and we were unable to recover it. 00:42:53.092 [2024-05-15 09:08:47.787303] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.092 [2024-05-15 09:08:47.787461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.092 [2024-05-15 09:08:47.787488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.092 [2024-05-15 09:08:47.787503] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.092 [2024-05-15 09:08:47.787516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.092 [2024-05-15 09:08:47.787551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.092 qpair failed and we were unable to recover it. 00:42:53.092 [2024-05-15 09:08:47.797333] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.092 [2024-05-15 09:08:47.797439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.092 [2024-05-15 09:08:47.797466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.092 [2024-05-15 09:08:47.797481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.092 [2024-05-15 09:08:47.797494] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.092 [2024-05-15 09:08:47.797524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.092 qpair failed and we were unable to recover it. 
00:42:53.092 [2024-05-15 09:08:47.807396] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.092 [2024-05-15 09:08:47.807501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.092 [2024-05-15 09:08:47.807528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.092 [2024-05-15 09:08:47.807543] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.092 [2024-05-15 09:08:47.807555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.092 [2024-05-15 09:08:47.807585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.092 qpair failed and we were unable to recover it. 00:42:53.092 [2024-05-15 09:08:47.817423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.092 [2024-05-15 09:08:47.817538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.092 [2024-05-15 09:08:47.817565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.092 [2024-05-15 09:08:47.817579] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.092 [2024-05-15 09:08:47.817592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.092 [2024-05-15 09:08:47.817622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.092 qpair failed and we were unable to recover it. 00:42:53.092 [2024-05-15 09:08:47.827457] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.092 [2024-05-15 09:08:47.827592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.092 [2024-05-15 09:08:47.827618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.092 [2024-05-15 09:08:47.827634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.092 [2024-05-15 09:08:47.827647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.092 [2024-05-15 09:08:47.827690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.092 qpair failed and we were unable to recover it. 
00:42:53.092 [2024-05-15 09:08:47.837464] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.092 [2024-05-15 09:08:47.837568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.092 [2024-05-15 09:08:47.837600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.092 [2024-05-15 09:08:47.837616] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.092 [2024-05-15 09:08:47.837629] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.092 [2024-05-15 09:08:47.837658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.092 qpair failed and we were unable to recover it. 00:42:53.092 [2024-05-15 09:08:47.847555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.092 [2024-05-15 09:08:47.847657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.092 [2024-05-15 09:08:47.847684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.092 [2024-05-15 09:08:47.847698] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.092 [2024-05-15 09:08:47.847712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.092 [2024-05-15 09:08:47.847742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.092 qpair failed and we were unable to recover it. 00:42:53.092 [2024-05-15 09:08:47.857490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.092 [2024-05-15 09:08:47.857589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.092 [2024-05-15 09:08:47.857615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.092 [2024-05-15 09:08:47.857630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.092 [2024-05-15 09:08:47.857642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.092 [2024-05-15 09:08:47.857672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.093 qpair failed and we were unable to recover it. 
00:42:53.093 [2024-05-15 09:08:47.867528] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.093 [2024-05-15 09:08:47.867644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.093 [2024-05-15 09:08:47.867671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.093 [2024-05-15 09:08:47.867685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.093 [2024-05-15 09:08:47.867698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.093 [2024-05-15 09:08:47.867741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.093 qpair failed and we were unable to recover it. 00:42:53.093 [2024-05-15 09:08:47.877539] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.093 [2024-05-15 09:08:47.877646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.093 [2024-05-15 09:08:47.877673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.093 [2024-05-15 09:08:47.877688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.093 [2024-05-15 09:08:47.877706] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.093 [2024-05-15 09:08:47.877736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.093 qpair failed and we were unable to recover it. 00:42:53.352 [2024-05-15 09:08:47.887615] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.352 [2024-05-15 09:08:47.887761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.352 [2024-05-15 09:08:47.887788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.352 [2024-05-15 09:08:47.887803] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.352 [2024-05-15 09:08:47.887815] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.352 [2024-05-15 09:08:47.887845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.352 qpair failed and we were unable to recover it. 
00:42:53.352 [2024-05-15 09:08:47.897593] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.352 [2024-05-15 09:08:47.897692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.352 [2024-05-15 09:08:47.897719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.352 [2024-05-15 09:08:47.897733] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.352 [2024-05-15 09:08:47.897745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.352 [2024-05-15 09:08:47.897775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.352 qpair failed and we were unable to recover it. 00:42:53.352 [2024-05-15 09:08:47.907640] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.352 [2024-05-15 09:08:47.907747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.352 [2024-05-15 09:08:47.907774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.352 [2024-05-15 09:08:47.907790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.352 [2024-05-15 09:08:47.907802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.352 [2024-05-15 09:08:47.907834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.352 qpair failed and we were unable to recover it. 00:42:53.352 [2024-05-15 09:08:47.917691] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.352 [2024-05-15 09:08:47.917804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.352 [2024-05-15 09:08:47.917831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.352 [2024-05-15 09:08:47.917846] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.352 [2024-05-15 09:08:47.917860] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.352 [2024-05-15 09:08:47.917889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.352 qpair failed and we were unable to recover it. 
00:42:53.352 [2024-05-15 09:08:47.927719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.352 [2024-05-15 09:08:47.927870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.352 [2024-05-15 09:08:47.927897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.352 [2024-05-15 09:08:47.927912] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.353 [2024-05-15 09:08:47.927925] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.353 [2024-05-15 09:08:47.927954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.353 qpair failed and we were unable to recover it. 00:42:53.353 [2024-05-15 09:08:47.937736] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.353 [2024-05-15 09:08:47.937864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.353 [2024-05-15 09:08:47.937893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.353 [2024-05-15 09:08:47.937908] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.353 [2024-05-15 09:08:47.937925] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.353 [2024-05-15 09:08:47.937956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.353 qpair failed and we were unable to recover it. 00:42:53.353 [2024-05-15 09:08:47.947742] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.353 [2024-05-15 09:08:47.947851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.353 [2024-05-15 09:08:47.947878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.353 [2024-05-15 09:08:47.947893] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.353 [2024-05-15 09:08:47.947905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.353 [2024-05-15 09:08:47.947935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.353 qpair failed and we were unable to recover it. 
00:42:53.353 [2024-05-15 09:08:47.957786] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.353 [2024-05-15 09:08:47.957893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.353 [2024-05-15 09:08:47.957919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.353 [2024-05-15 09:08:47.957933] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.353 [2024-05-15 09:08:47.957946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.353 [2024-05-15 09:08:47.957975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.353 qpair failed and we were unable to recover it. 00:42:53.353 [2024-05-15 09:08:47.967827] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.353 [2024-05-15 09:08:47.967940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.353 [2024-05-15 09:08:47.967968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.353 [2024-05-15 09:08:47.967989] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.353 [2024-05-15 09:08:47.968002] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.353 [2024-05-15 09:08:47.968032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.353 qpair failed and we were unable to recover it. 00:42:53.353 [2024-05-15 09:08:47.977833] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.353 [2024-05-15 09:08:47.977951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.353 [2024-05-15 09:08:47.977978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.353 [2024-05-15 09:08:47.977993] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.353 [2024-05-15 09:08:47.978006] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.353 [2024-05-15 09:08:47.978035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.353 qpair failed and we were unable to recover it. 
00:42:53.353 [2024-05-15 09:08:47.987841] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.353 [2024-05-15 09:08:47.987947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.353 [2024-05-15 09:08:47.987973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.353 [2024-05-15 09:08:47.987988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.353 [2024-05-15 09:08:47.988001] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.353 [2024-05-15 09:08:47.988031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.353 qpair failed and we were unable to recover it. 00:42:53.353 [2024-05-15 09:08:47.997858] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.353 [2024-05-15 09:08:47.997967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.353 [2024-05-15 09:08:47.997993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.353 [2024-05-15 09:08:47.998008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.353 [2024-05-15 09:08:47.998021] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.353 [2024-05-15 09:08:47.998051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.353 qpair failed and we were unable to recover it. 00:42:53.353 [2024-05-15 09:08:48.007914] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.353 [2024-05-15 09:08:48.008012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.353 [2024-05-15 09:08:48.008037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.353 [2024-05-15 09:08:48.008052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.353 [2024-05-15 09:08:48.008065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.353 [2024-05-15 09:08:48.008094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.353 qpair failed and we were unable to recover it. 
00:42:53.353 [2024-05-15 09:08:48.018019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.353 [2024-05-15 09:08:48.018165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.353 [2024-05-15 09:08:48.018192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.353 [2024-05-15 09:08:48.018207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.353 [2024-05-15 09:08:48.018227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.353 [2024-05-15 09:08:48.018258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.353 qpair failed and we were unable to recover it. 00:42:53.353 [2024-05-15 09:08:48.028007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.353 [2024-05-15 09:08:48.028112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.353 [2024-05-15 09:08:48.028137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.353 [2024-05-15 09:08:48.028151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.353 [2024-05-15 09:08:48.028163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.353 [2024-05-15 09:08:48.028193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.353 qpair failed and we were unable to recover it. 00:42:53.353 [2024-05-15 09:08:48.038005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.353 [2024-05-15 09:08:48.038112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.353 [2024-05-15 09:08:48.038142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.353 [2024-05-15 09:08:48.038159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.353 [2024-05-15 09:08:48.038172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.353 [2024-05-15 09:08:48.038202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.353 qpair failed and we were unable to recover it. 
00:42:53.353 [2024-05-15 09:08:48.048053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.353 [2024-05-15 09:08:48.048159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.353 [2024-05-15 09:08:48.048186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.353 [2024-05-15 09:08:48.048201] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.353 [2024-05-15 09:08:48.048227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.353 [2024-05-15 09:08:48.048261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.353 qpair failed and we were unable to recover it. 00:42:53.353 [2024-05-15 09:08:48.058037] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.353 [2024-05-15 09:08:48.058135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.353 [2024-05-15 09:08:48.058162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.353 [2024-05-15 09:08:48.058183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.353 [2024-05-15 09:08:48.058196] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.353 [2024-05-15 09:08:48.058233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.353 qpair failed and we were unable to recover it. 00:42:53.353 [2024-05-15 09:08:48.068077] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.353 [2024-05-15 09:08:48.068183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.353 [2024-05-15 09:08:48.068209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.353 [2024-05-15 09:08:48.068232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.353 [2024-05-15 09:08:48.068245] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.353 [2024-05-15 09:08:48.068275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.353 qpair failed and we were unable to recover it. 
00:42:53.353 [2024-05-15 09:08:48.078115] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.353 [2024-05-15 09:08:48.078226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.353 [2024-05-15 09:08:48.078251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.353 [2024-05-15 09:08:48.078267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.353 [2024-05-15 09:08:48.078283] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.353 [2024-05-15 09:08:48.078324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.353 qpair failed and we were unable to recover it. 00:42:53.353 [2024-05-15 09:08:48.088135] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.353 [2024-05-15 09:08:48.088254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.353 [2024-05-15 09:08:48.088282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.353 [2024-05-15 09:08:48.088297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.353 [2024-05-15 09:08:48.088310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.353 [2024-05-15 09:08:48.088342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.353 qpair failed and we were unable to recover it. 00:42:53.353 [2024-05-15 09:08:48.098158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.353 [2024-05-15 09:08:48.098286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.354 [2024-05-15 09:08:48.098315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.354 [2024-05-15 09:08:48.098330] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.354 [2024-05-15 09:08:48.098343] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.354 [2024-05-15 09:08:48.098373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.354 qpair failed and we were unable to recover it. 
00:42:53.354 [2024-05-15 09:08:48.108199] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.354 [2024-05-15 09:08:48.108323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.354 [2024-05-15 09:08:48.108349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.354 [2024-05-15 09:08:48.108364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.354 [2024-05-15 09:08:48.108376] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.354 [2024-05-15 09:08:48.108407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.354 qpair failed and we were unable to recover it. 00:42:53.354 [2024-05-15 09:08:48.118251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.354 [2024-05-15 09:08:48.118355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.354 [2024-05-15 09:08:48.118382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.354 [2024-05-15 09:08:48.118396] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.354 [2024-05-15 09:08:48.118409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.354 [2024-05-15 09:08:48.118451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.354 qpair failed and we were unable to recover it. 00:42:53.354 [2024-05-15 09:08:48.128254] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.354 [2024-05-15 09:08:48.128353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.354 [2024-05-15 09:08:48.128381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.354 [2024-05-15 09:08:48.128396] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.354 [2024-05-15 09:08:48.128409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.354 [2024-05-15 09:08:48.128452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.354 qpair failed and we were unable to recover it. 
00:42:53.354 [2024-05-15 09:08:48.138340] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.354 [2024-05-15 09:08:48.138445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.354 [2024-05-15 09:08:48.138471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.354 [2024-05-15 09:08:48.138486] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.354 [2024-05-15 09:08:48.138500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.354 [2024-05-15 09:08:48.138530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.354 qpair failed and we were unable to recover it. 00:42:53.613 [2024-05-15 09:08:48.148322] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.613 [2024-05-15 09:08:48.148431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.613 [2024-05-15 09:08:48.148464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.613 [2024-05-15 09:08:48.148480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.613 [2024-05-15 09:08:48.148493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.613 [2024-05-15 09:08:48.148523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.613 qpair failed and we were unable to recover it. 00:42:53.613 [2024-05-15 09:08:48.158374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.614 [2024-05-15 09:08:48.158482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.614 [2024-05-15 09:08:48.158507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.614 [2024-05-15 09:08:48.158522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.614 [2024-05-15 09:08:48.158535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.614 [2024-05-15 09:08:48.158577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.614 qpair failed and we were unable to recover it. 
00:42:53.614 [2024-05-15 09:08:48.168392] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.614 [2024-05-15 09:08:48.168500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.614 [2024-05-15 09:08:48.168528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.614 [2024-05-15 09:08:48.168543] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.614 [2024-05-15 09:08:48.168555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:53.614 [2024-05-15 09:08:48.168585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.614 qpair failed and we were unable to recover it.
00:42:54.138 [2024-05-15 09:08:48.760047] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.138 [2024-05-15 09:08:48.760156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.138 [2024-05-15 09:08:48.760181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.138 [2024-05-15 09:08:48.760195] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.138 [2024-05-15 09:08:48.760208] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f30000b90 00:42:54.138 [2024-05-15 09:08:48.760249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.138 qpair failed and we were unable to recover it.
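The records above all trace the same initiator-side path: the target rejects the I/O qpair CONNECT ("Unknown controller ID 0x1", Fabrics status sct 1 / sc 130, i.e. 0x82, Connect Invalid Parameters), the TCP qpair poll then surfaces -ENXIO (-6), and the qpair is abandoned. As a minimal sketch only, not code from this test suite, and assuming a controller handle obtained earlier from spdk_nvme_connect(), the allocate-and-poll pattern behind those messages looks roughly like:

/* Sketch of the initiator-side qpair poll loop; all calls are public
 * SPDK API, but the surrounding structure is illustrative only. */
#include <spdk/nvme.h>
#include <stdio.h>

static void poll_io_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_qpair *qpair;
	int32_t rc;

	/* NULL/0 opts: use the controller's default I/O qpair options. */
	qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	if (qpair == NULL) {
		fprintf(stderr, "alloc_io_qpair failed\n");
		return;
	}

	for (;;) {
		/* max_completions = 0: drain everything available. */
		rc = spdk_nvme_qpair_process_completions(qpair, 0);
		if (rc < 0) {
			/* e.g. -ENXIO (-6) once the target drops the
			 * connection: the "CQ transport error -6" above. */
			fprintf(stderr, "qpair poll failed: %d\n", rc);
			break;
		}
	}

	spdk_nvme_ctrlr_free_io_qpair(qpair);
}

In this disconnect test the failures are the expected outcome; the loop terminating with a negative return is what produces each "qpair failed and we were unable to recover it." record.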
00:42:54.138 [2024-05-15 09:08:48.770084] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.138 [2024-05-15 09:08:48.770203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.139 [2024-05-15 09:08:48.770241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.139 [2024-05-15 09:08:48.770257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.139 [2024-05-15 09:08:48.770285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16a1570 00:42:54.139 [2024-05-15 09:08:48.770315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:42:54.139 qpair failed and we were unable to recover it. 00:42:54.139 [2024-05-15 09:08:48.780128] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.139 [2024-05-15 09:08:48.780267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.139 [2024-05-15 09:08:48.780302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.139 [2024-05-15 09:08:48.780326] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.139 [2024-05-15 09:08:48.780348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16a1570 00:42:54.139 [2024-05-15 09:08:48.780380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:42:54.139 qpair failed and we were unable to recover it. 00:42:54.139 [2024-05-15 09:08:48.790133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.139 [2024-05-15 09:08:48.790246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.139 [2024-05-15 09:08:48.790278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.139 [2024-05-15 09:08:48.790293] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.139 [2024-05-15 09:08:48.790305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:54.139 [2024-05-15 09:08:48.790337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:54.139 qpair failed and we were unable to recover it. 
00:42:54.139 [2024-05-15 09:08:48.800150] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.139 [2024-05-15 09:08:48.800267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.139 [2024-05-15 09:08:48.800294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.139 [2024-05-15 09:08:48.800309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.139 [2024-05-15 09:08:48.800321] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f38000b90 00:42:54.139 [2024-05-15 09:08:48.800352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:42:54.139 qpair failed and we were unable to recover it. 00:42:54.139 [2024-05-15 09:08:48.810203] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.139 [2024-05-15 09:08:48.810312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.139 [2024-05-15 09:08:48.810343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.139 [2024-05-15 09:08:48.810358] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.139 [2024-05-15 09:08:48.810371] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f40000b90 00:42:54.139 [2024-05-15 09:08:48.810402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:42:54.139 qpair failed and we were unable to recover it. 00:42:54.139 Controller properly reset. 00:42:54.139 Initializing NVMe Controllers 00:42:54.139 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:54.139 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:54.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:42:54.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:42:54.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:42:54.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:42:54.139 Initialization complete. Launching workers. 
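Once "Controller properly reset." is printed, the host re-attaches to the same TCP listener and relaunches its four worker threads, which is the recovery the tc2 case is checking for. Outside the harness, the same attach can be exercised by hand with nvme-cli against the address and subsystem NQN shown above; a sketch, assuming the kernel nvme-tcp module and nvme-cli are installed on the initiator:

    modprobe nvme-tcp
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list                                      # namespace appears once CONNECT succeeds
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1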
00:42:54.139 Starting thread on core 1 00:42:54.139 Starting thread on core 2 00:42:54.139 Starting thread on core 3 00:42:54.139 Starting thread on core 0 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:42:54.139 00:42:54.139 real 0m10.786s 00:42:54.139 user 0m18.498s 00:42:54.139 sys 0m5.117s 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:54.139 ************************************ 00:42:54.139 END TEST nvmf_target_disconnect_tc2 00:42:54.139 ************************************ 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:42:54.139 rmmod nvme_tcp 00:42:54.139 rmmod nvme_fabrics 00:42:54.139 rmmod nvme_keyring 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2441890 ']' 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2441890 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@947 -- # '[' -z 2441890 ']' 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # kill -0 2441890 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # uname 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:42:54.139 09:08:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2441890 00:42:54.398 09:08:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # process_name=reactor_4 00:42:54.398 09:08:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' reactor_4 = sudo ']' 00:42:54.398 09:08:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2441890' 00:42:54.398 killing process with pid 2441890 00:42:54.398 09:08:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # kill 2441890 00:42:54.398 [2024-05-15 09:08:48.949835] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 
1 times 00:42:54.398 09:08:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # wait 2441890 00:42:54.658 09:08:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:42:54.658 09:08:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:42:54.658 09:08:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:42:54.658 09:08:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:54.658 09:08:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:42:54.658 09:08:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:54.658 09:08:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:54.658 09:08:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:56.559 09:08:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:42:56.559 00:42:56.559 real 0m15.830s 00:42:56.559 user 0m44.725s 00:42:56.559 sys 0m7.270s 00:42:56.559 09:08:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # xtrace_disable 00:42:56.559 09:08:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:42:56.559 ************************************ 00:42:56.559 END TEST nvmf_target_disconnect 00:42:56.559 ************************************ 00:42:56.559 09:08:51 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:42:56.559 09:08:51 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:42:56.559 09:08:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:56.559 09:08:51 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:42:56.559 00:42:56.559 real 26m58.288s 00:42:56.559 user 72m56.332s 00:42:56.559 sys 6m25.658s 00:42:56.559 09:08:51 nvmf_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:42:56.559 09:08:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:56.559 ************************************ 00:42:56.559 END TEST nvmf_tcp 00:42:56.559 ************************************ 00:42:56.559 09:08:51 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:42:56.559 09:08:51 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:56.559 09:08:51 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:42:56.559 09:08:51 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:42:56.559 09:08:51 -- common/autotest_common.sh@10 -- # set +x 00:42:56.817 ************************************ 00:42:56.817 START TEST spdkcli_nvmf_tcp 00:42:56.817 ************************************ 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:56.817 * Looking for test storage... 
00:42:56.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2443085 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2443085 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@828 -- # '[' -z 2443085 ']' 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:56.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:42:56.817 09:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:56.817 [2024-05-15 09:08:51.463721] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:42:56.817 [2024-05-15 09:08:51.463796] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2443085 ] 00:42:56.817 EAL: No free 2048 kB hugepages reported on node 1 00:42:56.817 [2024-05-15 09:08:51.529203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:57.076 [2024-05-15 09:08:51.615702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:57.076 [2024-05-15 09:08:51.615707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:57.076 09:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:42:57.076 09:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@861 -- # return 0 00:42:57.076 09:08:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:42:57.076 09:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:42:57.076 09:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:57.076 09:08:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:42:57.076 09:08:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:42:57.076 09:08:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:42:57.076 09:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:42:57.076 09:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:57.076 09:08:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:42:57.076 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:42:57.076 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:42:57.076 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:42:57.076 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:42:57.076 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:42:57.076 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:42:57.076 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:57.076 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:42:57.076 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:42:57.076 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:57.076 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:57.076 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:42:57.076 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:57.076 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:57.076 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:42:57.076 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:57.076 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:57.076 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:57.076 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:57.076 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:42:57.076 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:42:57.076 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:57.076 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:42:57.076 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:57.076 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:42:57.076 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:42:57.076 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:42:57.076 ' 00:42:59.605 [2024-05-15 09:08:54.288198] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:00.978 [2024-05-15 09:08:55.528010] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:43:00.978 [2024-05-15 09:08:55.528701] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:43:03.505 [2024-05-15 09:08:57.803850] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:43:05.404 [2024-05-15 09:08:59.757908] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:43:06.777 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:43:06.777 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:43:06.777 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:43:06.777 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:43:06.777 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:43:06.777 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:43:06.777 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:43:06.777 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:43:06.777 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:43:06.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:43:06.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:43:06.777 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:43:06.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:43:06.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:43:06.777 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:43:06.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:43:06.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:43:06.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:43:06.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:43:06.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:43:06.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:43:06.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:43:06.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:43:06.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:43:06.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:43:06.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:43:06.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:43:06.777 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:43:06.777 09:09:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:43:06.777 09:09:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:43:06.777 09:09:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:06.777 09:09:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:43:06.777 09:09:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:43:06.777 09:09:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:06.777 09:09:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:43:06.777 09:09:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:43:07.035 09:09:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:43:07.293 09:09:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:43:07.293 09:09:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:43:07.293 09:09:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:43:07.293 09:09:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:07.293 09:09:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:43:07.293 09:09:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:43:07.293 09:09:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:07.293 09:09:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:43:07.293 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:43:07.293 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:43:07.293 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:43:07.293 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:43:07.293 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:43:07.293 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:43:07.293 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:43:07.293 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:43:07.293 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:43:07.293 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:43:07.293 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:43:07.293 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:43:07.293 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:43:07.293 ' 00:43:12.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:43:12.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:43:12.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:12.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:43:12.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:43:12.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:43:12.555 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:43:12.555 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:12.555 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:43:12.555 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:43:12.555 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:43:12.555 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:43:12.555 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:43:12.555 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2443085 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' -z 2443085 ']' 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # kill -0 2443085 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # uname 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2443085 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2443085' 00:43:12.555 killing process with pid 2443085 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # kill 2443085 00:43:12.555 [2024-05-15 09:09:07.109341] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # wait 2443085 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2443085 ']' 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2443085 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' -z 2443085 ']' 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # kill -0 2443085 00:43:12.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2443085) - No such process 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # echo 'Process with pid 2443085 is not found' 00:43:12.555 Process with pid 2443085 is not found 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:43:12.555 09:09:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:43:12.556 09:09:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:43:12.556 00:43:12.556 real 0m15.968s 00:43:12.556 user 0m33.731s 00:43:12.556 sys 0m0.802s 00:43:12.556 09:09:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:43:12.556 09:09:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:43:12.556 ************************************ 00:43:12.556 END TEST spdkcli_nvmf_tcp 00:43:12.556 ************************************ 00:43:12.556 09:09:07 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:12.556 09:09:07 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:43:12.556 09:09:07 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:43:12.556 09:09:07 -- common/autotest_common.sh@10 -- # set +x 00:43:12.814 ************************************ 00:43:12.814 START TEST nvmf_identify_passthru 00:43:12.814 ************************************ 00:43:12.814 09:09:07 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:12.814 * Looking for test storage... 00:43:12.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:12.814 09:09:07 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:12.814 09:09:07 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:12.814 09:09:07 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:12.814 09:09:07 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:12.814 09:09:07 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.814 09:09:07 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.814 09:09:07 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.814 09:09:07 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:12.814 09:09:07 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:43:12.814 09:09:07 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:12.814 09:09:07 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:12.814 09:09:07 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:12.814 09:09:07 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:12.814 09:09:07 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.814 09:09:07 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.814 09:09:07 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.814 09:09:07 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:12.814 09:09:07 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.814 09:09:07 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:12.814 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:43:12.815 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:43:12.815 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:43:12.815 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:12.815 09:09:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:12.815 09:09:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:12.815 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:43:12.815 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:43:12.815 09:09:07 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:43:12.815 09:09:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:15.375 09:09:09 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:43:15.375 Found 0000:09:00.0 (0x8086 - 0x159b) 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:43:15.375 Found 0000:09:00.1 (0x8086 - 0x159b) 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:43:15.375 Found net devices under 0000:09:00.0: cvl_0_0 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:15.375 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:43:15.376 Found net devices under 0000:09:00.1: cvl_0_1 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
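The gather_supported_nvmf_pci_devs walk above matches PCI vendor:device pairs against the NICs the tests know how to drive; here both ports of an Intel E810 (8086:159b, bound to the ice driver) are found and their net devices are resolved through sysfs, yielding cvl_0_0 and cvl_0_1. The same lookup can be reproduced by hand; a sketch, assuming lspci from pciutils is available:

    lspci -d 8086:159b                           # list E810 ports, e.g. 09:00.0 and 09:00.1
    ls /sys/bus/pci/devices/0000:09:00.0/net/    # -> cvl_0_0, the netdev behind the first port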
00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:43:15.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:15.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:43:15.376 00:43:15.376 --- 10.0.0.2 ping statistics --- 00:43:15.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:15.376 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:15.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:15.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:43:15.376 00:43:15.376 --- 10.0.0.1 ping statistics --- 00:43:15.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:15.376 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:43:15.376 09:09:09 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:43:15.376 09:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:43:15.376 09:09:09 nvmf_identify_passthru -- common/autotest_common.sh@721 -- # xtrace_disable 00:43:15.376 09:09:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:15.376 09:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:43:15.376 09:09:09 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=() 00:43:15.376 09:09:09 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # local bdfs 00:43:15.376 09:09:09 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # bdfs=($(get_nvme_bdfs)) 00:43:15.376 09:09:09 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # get_nvme_bdfs 00:43:15.376 09:09:09 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=() 00:43:15.376 09:09:09 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # local bdfs 00:43:15.376 09:09:09 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:43:15.376 09:09:09 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:43:15.376 09:09:09 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:43:15.376 09:09:09 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:43:15.376 09:09:09 nvmf_identify_passthru -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:0b:00.0 00:43:15.376 09:09:09 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # echo 0000:0b:00.0 00:43:15.376 09:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:43:15.376 09:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:43:15.376 09:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:43:15.376 09:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:43:15.376 09:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:43:15.376 EAL: No free 2048 kB hugepages reported on node 1 00:43:19.558 
09:09:14 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:43:19.558 09:09:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:43:19.558 09:09:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:43:19.558 09:09:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:43:19.558 EAL: No free 2048 kB hugepages reported on node 1 00:43:23.740 09:09:18 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:43:23.740 09:09:18 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:43:23.740 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:43:23.740 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:23.740 09:09:18 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:43:23.740 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@721 -- # xtrace_disable 00:43:23.740 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:23.740 09:09:18 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2448489 00:43:23.740 09:09:18 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:43:23.740 09:09:18 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:23.740 09:09:18 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2448489 00:43:23.740 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@828 -- # '[' -z 2448489 ']' 00:43:23.740 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:23.740 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local max_retries=100 00:43:23.740 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:23.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:23.740 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # xtrace_disable 00:43:23.740 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:23.740 [2024-05-15 09:09:18.271282] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:43:23.740 [2024-05-15 09:09:18.271371] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:23.740 EAL: No free 2048 kB hugepages reported on node 1 00:43:23.740 [2024-05-15 09:09:18.344470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:23.740 [2024-05-15 09:09:18.431107] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:23.740 [2024-05-15 09:09:18.431159] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
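The identify step traced above reduces to three reusable commands. A minimal standalone sketch, with $SPDK_DIR standing in for the Jenkins workspace checkout used by this job (every command is taken verbatim from the trace):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# get_first_nvme_bdf: gen_nvme.sh emits one bdev_nvme_attach_controller
# config fragment per local NVMe controller; take the first PCI address.
bdf=$("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)

# Identify the controller directly over PCIe and extract serial and model.
# awk '{print $3}' keeps only the first word, hence nvme_model_number=INTEL.
serial=$("$SPDK_DIR/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 |
    grep 'Serial Number:' | awk '{print $3}')
model=$("$SPDK_DIR/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 |
    grep 'Model Number:' | awk '{print $3}')
echo "bdf=$bdf serial=$serial model=$model"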
00:43:23.740 [2024-05-15 09:09:18.431181] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:23.740 [2024-05-15 09:09:18.431192] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:23.740 [2024-05-15 09:09:18.431222] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:23.740 [2024-05-15 09:09:18.431308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:43:23.740 [2024-05-15 09:09:18.431339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:43:23.740 [2024-05-15 09:09:18.431387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:43:23.740 [2024-05-15 09:09:18.431390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:23.740 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:43:23.740 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@861 -- # return 0 00:43:23.740 09:09:18 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:43:23.741 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:23.741 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:23.741 INFO: Log level set to 20 00:43:23.741 INFO: Requests: 00:43:23.741 { 00:43:23.741 "jsonrpc": "2.0", 00:43:23.741 "method": "nvmf_set_config", 00:43:23.741 "id": 1, 00:43:23.741 "params": { 00:43:23.741 "admin_cmd_passthru": { 00:43:23.741 "identify_ctrlr": true 00:43:23.741 } 00:43:23.741 } 00:43:23.741 } 00:43:23.741 00:43:23.741 INFO: response: 00:43:23.741 { 00:43:23.741 "jsonrpc": "2.0", 00:43:23.741 "id": 1, 00:43:23.741 "result": true 00:43:23.741 } 00:43:23.741 00:43:23.741 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:23.741 09:09:18 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:43:23.741 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:23.741 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:23.741 INFO: Setting log level to 20 00:43:23.741 INFO: Setting log level to 20 00:43:23.741 INFO: Log level set to 20 00:43:23.741 INFO: Log level set to 20 00:43:23.741 INFO: Requests: 00:43:23.741 { 00:43:23.741 "jsonrpc": "2.0", 00:43:23.741 "method": "framework_start_init", 00:43:23.741 "id": 1 00:43:23.741 } 00:43:23.741 00:43:23.741 INFO: Requests: 00:43:23.741 { 00:43:23.741 "jsonrpc": "2.0", 00:43:23.741 "method": "framework_start_init", 00:43:23.741 "id": 1 00:43:23.741 } 00:43:23.741 00:43:23.998 [2024-05-15 09:09:18.584519] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:43:23.998 INFO: response: 00:43:23.998 { 00:43:23.998 "jsonrpc": "2.0", 00:43:23.998 "id": 1, 00:43:23.998 "result": true 00:43:23.998 } 00:43:23.998 00:43:23.998 INFO: response: 00:43:23.998 { 00:43:23.998 "jsonrpc": "2.0", 00:43:23.998 "id": 1, 00:43:23.998 "result": true 00:43:23.998 } 00:43:23.998 00:43:23.998 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:23.998 09:09:18 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:23.998 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:23.998 09:09:18 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:43:23.998 INFO: Setting log level to 40 00:43:23.998 INFO: Setting log level to 40 00:43:23.998 INFO: Setting log level to 40 00:43:23.998 [2024-05-15 09:09:18.594561] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:23.998 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:23.998 09:09:18 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:43:23.998 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:43:23.998 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:23.998 09:09:18 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:43:23.998 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:23.998 09:09:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:27.274 Nvme0n1 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:27.274 09:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:27.274 09:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:27.274 09:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:27.274 [2024-05-15 09:09:21.490823] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:43:27.274 [2024-05-15 09:09:21.491155] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:27.274 09:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:27.274 [ 00:43:27.274 { 00:43:27.274 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:43:27.274 "subtype": "Discovery", 00:43:27.274 "listen_addresses": [], 00:43:27.274 "allow_any_host": true, 00:43:27.274 "hosts": [] 00:43:27.274 }, 00:43:27.274 { 00:43:27.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:43:27.274 "subtype": "NVMe", 00:43:27.274 "listen_addresses": [ 00:43:27.274 { 00:43:27.274 "trtype": "TCP", 
00:43:27.274 "adrfam": "IPv4", 00:43:27.274 "traddr": "10.0.0.2", 00:43:27.274 "trsvcid": "4420" 00:43:27.274 } 00:43:27.274 ], 00:43:27.274 "allow_any_host": true, 00:43:27.274 "hosts": [], 00:43:27.274 "serial_number": "SPDK00000000000001", 00:43:27.274 "model_number": "SPDK bdev Controller", 00:43:27.274 "max_namespaces": 1, 00:43:27.274 "min_cntlid": 1, 00:43:27.274 "max_cntlid": 65519, 00:43:27.274 "namespaces": [ 00:43:27.274 { 00:43:27.274 "nsid": 1, 00:43:27.274 "bdev_name": "Nvme0n1", 00:43:27.274 "name": "Nvme0n1", 00:43:27.274 "nguid": "BF526BEADAD44A28B9B67D209CB834CA", 00:43:27.274 "uuid": "bf526bea-dad4-4a28-b9b6-7d209cb834ca" 00:43:27.274 } 00:43:27.274 ] 00:43:27.274 } 00:43:27.274 ] 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:27.274 09:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:27.274 09:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:43:27.274 09:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:43:27.274 EAL: No free 2048 kB hugepages reported on node 1 00:43:27.274 09:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:43:27.274 09:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:27.274 09:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:43:27.274 09:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:43:27.274 EAL: No free 2048 kB hugepages reported on node 1 00:43:27.274 09:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:43:27.274 09:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:43:27.274 09:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:43:27.274 09:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:27.274 09:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:43:27.274 09:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:43:27.274 09:09:21 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:43:27.274 09:09:21 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:43:27.274 09:09:21 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:43:27.274 09:09:21 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:43:27.274 09:09:21 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:43:27.274 09:09:21 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:43:27.274 rmmod nvme_tcp 00:43:27.274 rmmod nvme_fabrics 00:43:27.274 rmmod 
nvme_keyring 00:43:27.274 09:09:21 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:43:27.274 09:09:21 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:43:27.274 09:09:21 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:43:27.274 09:09:21 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2448489 ']' 00:43:27.274 09:09:21 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2448489 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@947 -- # '[' -z 2448489 ']' 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # kill -0 2448489 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # uname 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2448489 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2448489' 00:43:27.274 killing process with pid 2448489 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # kill 2448489 00:43:27.274 [2024-05-15 09:09:21.887840] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:43:27.274 09:09:21 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # wait 2448489 00:43:28.646 09:09:23 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:43:28.646 09:09:23 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:43:28.646 09:09:23 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:43:28.646 09:09:23 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:43:28.646 09:09:23 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:43:28.646 09:09:23 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:28.646 09:09:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:28.646 09:09:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:31.173 09:09:25 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:43:31.173 00:43:31.173 real 0m18.046s 00:43:31.173 user 0m26.104s 00:43:31.173 sys 0m2.587s 00:43:31.173 09:09:25 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # xtrace_disable 00:43:31.173 09:09:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:31.173 ************************************ 00:43:31.173 END TEST nvmf_identify_passthru 00:43:31.173 ************************************ 00:43:31.173 09:09:25 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:31.173 09:09:25 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:43:31.173 09:09:25 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:43:31.173 09:09:25 -- common/autotest_common.sh@10 -- # set +x 00:43:31.173 ************************************ 00:43:31.173 START TEST nvmf_dif 
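At this test boundary it is worth condensing what nvmf_identify_passthru just exercised. rpc_cmd in these traces forwards its arguments to scripts/rpc.py, so the flow, minus the harness plumbing, is roughly the sketch below ($SPDK_DIR, $bdf and $serial as in the earlier sketch; the 10.0.0.2:4420 listener comes from the nvmf_tcp_init sequence above):

RPC="$SPDK_DIR/scripts/rpc.py"

# nvmf_tgt was started with --wait-for-rpc, so this runs before framework
# init: the passthru identify handler must be configured pre-start.
$RPC nvmf_set_config --passthru-identify-ctrlr
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o -u 8192

# Import the physical controller and re-export it 1:1 over NVMe/TCP.
$RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "$bdf"
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Passthru means the fabrics-side identify must report the physical serial
# (the script compares the model number the same way).
tcp_serial=$("$SPDK_DIR/build/bin/spdk_nvme_identify" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' |
    grep 'Serial Number:' | awk '{print $3}')
[ "$tcp_serial" = "$serial" ] || echo "passthru identify mismatch" >&2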
00:43:31.173 ************************************ 00:43:31.173 09:09:25 nvmf_dif -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:31.173 * Looking for test storage... 00:43:31.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:31.173 09:09:25 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:31.173 09:09:25 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:31.173 09:09:25 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:31.173 09:09:25 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:31.173 09:09:25 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:31.173 09:09:25 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:31.173 09:09:25 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:31.173 09:09:25 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:43:31.173 09:09:25 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:43:31.173 09:09:25 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:43:31.173 09:09:25 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:43:31.173 09:09:25 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:43:31.173 09:09:25 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:43:31.173 09:09:25 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:31.173 09:09:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:31.173 09:09:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:43:31.173 09:09:25 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:43:31.173 09:09:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
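The four NULL_* knobs set just above are the dif test's data-layout contract: null bdevs of 64 MiB with 512-byte blocks, 16 bytes of metadata per block, formatted as DIF type 1. They surface later in the run as the bdev_null_create arguments; the mapping, as a sketch ($RPC as in the earlier sketch, N is the subsystem id):

NULL_META=16; NULL_BLOCK_SIZE=512; NULL_SIZE=64; NULL_DIF=1
N=0   # the multi-subsystem test later repeats this for N=1

# bdev_null_create takes the total size in MiB, then the block size in
# bytes; --md-size/--dif-type enable per-block protection information.
$RPC bdev_null_create "bdev_null$N" "$NULL_SIZE" "$NULL_BLOCK_SIZE" \
    --md-size "$NULL_META" --dif-type "$NULL_DIF"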
00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:43:33.702 Found 0000:09:00.0 (0x8086 - 0x159b) 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:33.702 09:09:27 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:43:33.703 Found 0000:09:00.1 (0x8086 - 0x159b) 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:43:33.703 09:09:27 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:43:33.703 Found net devices under 0000:09:00.0: cvl_0_0 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:43:33.703 Found net devices under 0000:09:00.1: cvl_0_1 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:43:33.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:33.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:43:33.703 00:43:33.703 --- 10.0.0.2 ping statistics --- 00:43:33.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:33.703 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:43:33.703 09:09:27 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:33.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:33.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:43:33.703 00:43:33.703 --- 10.0.0.1 ping statistics --- 00:43:33.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:33.703 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:43:33.703 09:09:28 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:33.703 09:09:28 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:43:33.703 09:09:28 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:43:33.703 09:09:28 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:34.637 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:34.637 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:34.637 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:34.637 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:43:34.637 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:34.637 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:34.637 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:34.637 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:34.637 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:34.637 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:43:34.637 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:34.637 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:34.637 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:43:34.637 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:34.637 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:34.637 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:34.637 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:34.637 09:09:29 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:34.637 09:09:29 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:43:34.637 09:09:29 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:43:34.638 09:09:29 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:34.638 09:09:29 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:43:34.638 09:09:29 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:43:34.638 09:09:29 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:43:34.638 09:09:29 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:43:34.638 09:09:29 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:43:34.638 09:09:29 nvmf_dif -- common/autotest_common.sh@721 -- # xtrace_disable 00:43:34.638 09:09:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:34.638 09:09:29 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2452147 00:43:34.638 09:09:29 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:43:34.638 09:09:29 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2452147 00:43:34.638 09:09:29 nvmf_dif -- common/autotest_common.sh@828 -- # '[' -z 2452147 ']' 00:43:34.638 09:09:29 nvmf_dif -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:34.638 09:09:29 nvmf_dif -- common/autotest_common.sh@833 -- # local max_retries=100 00:43:34.638 09:09:29 nvmf_dif -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:34.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:34.638 09:09:29 nvmf_dif -- common/autotest_common.sh@837 -- # xtrace_disable 00:43:34.638 09:09:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:34.896 [2024-05-15 09:09:29.465095] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:43:34.896 [2024-05-15 09:09:29.465168] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:34.896 EAL: No free 2048 kB hugepages reported on node 1 00:43:34.896 [2024-05-15 09:09:29.538735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:34.896 [2024-05-15 09:09:29.621620] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:34.896 [2024-05-15 09:09:29.621672] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:34.896 [2024-05-15 09:09:29.621686] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:34.896 [2024-05-15 09:09:29.621696] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:34.896 [2024-05-15 09:09:29.621706] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
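As in the identify_passthru run earlier, nvmf_tcp_init splits the two e810 ports into a point-to-point rig: the target port moves into its own network namespace and every target-side command (including the nvmf_tgt launch above) is prefixed with ip netns exec. Collected from the trace, the sequence is:

NS=cvl_0_0_ns_spdk        # target-side namespace; cvl_0_1 stays host-side

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP (port 4420) on the initiator-side interface, then verify
# reachability in both directions before any NVMe traffic flows.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1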
00:43:34.896 [2024-05-15 09:09:29.621746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:35.154 09:09:29 nvmf_dif -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:43:35.154 09:09:29 nvmf_dif -- common/autotest_common.sh@861 -- # return 0 00:43:35.154 09:09:29 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:43:35.154 09:09:29 nvmf_dif -- common/autotest_common.sh@727 -- # xtrace_disable 00:43:35.154 09:09:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:35.154 09:09:29 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:35.154 09:09:29 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:43:35.155 09:09:29 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:43:35.155 09:09:29 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:35.155 09:09:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:35.155 [2024-05-15 09:09:29.753473] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:35.155 09:09:29 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:35.155 09:09:29 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:43:35.155 09:09:29 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:43:35.155 09:09:29 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:43:35.155 09:09:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:35.155 ************************************ 00:43:35.155 START TEST fio_dif_1_default 00:43:35.155 ************************************ 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # fio_dif_1 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:35.155 bdev_null0 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:35.155 [2024-05-15 09:09:29.817597] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:43:35.155 [2024-05-15 09:09:29.817839] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:43:35.155 { 00:43:35.155 "params": { 00:43:35.155 "name": "Nvme$subsystem", 00:43:35.155 "trtype": "$TEST_TRANSPORT", 00:43:35.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:35.155 "adrfam": "ipv4", 00:43:35.155 "trsvcid": "$NVMF_PORT", 00:43:35.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:35.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:35.155 "hdgst": ${hdgst:-false}, 00:43:35.155 "ddgst": ${ddgst:-false} 00:43:35.155 }, 00:43:35.155 "method": "bdev_nvme_attach_controller" 00:43:35.155 } 00:43:35.155 EOF 00:43:35.155 )") 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local sanitizers 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # shift 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local asan_lib= 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # for sanitizer in 
"${sanitizers[@]}" 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # grep libasan 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:43:35.155 "params": { 00:43:35.155 "name": "Nvme0", 00:43:35.155 "trtype": "tcp", 00:43:35.155 "traddr": "10.0.0.2", 00:43:35.155 "adrfam": "ipv4", 00:43:35.155 "trsvcid": "4420", 00:43:35.155 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:35.155 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:35.155 "hdgst": false, 00:43:35.155 "ddgst": false 00:43:35.155 }, 00:43:35.155 "method": "bdev_nvme_attach_controller" 00:43:35.155 }' 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # asan_lib= 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # asan_lib= 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:35.155 09:09:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:35.413 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:35.413 fio-3.35 00:43:35.413 Starting 1 thread 00:43:35.413 EAL: No free 2048 kB hugepages reported on node 1 00:43:47.646 00:43:47.646 filename0: (groupid=0, jobs=1): err= 0: pid=2452378: Wed May 15 09:09:40 2024 00:43:47.646 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10002msec) 00:43:47.646 slat (nsec): min=4943, max=66945, avg=9788.40, stdev=2994.16 00:43:47.646 clat (usec): min=621, max=47777, avg=21026.42, stdev=20290.45 00:43:47.646 lat (usec): min=629, max=47807, avg=21036.21, stdev=20290.65 00:43:47.646 clat percentiles (usec): 00:43:47.646 | 1.00th=[ 652], 5.00th=[ 668], 10.00th=[ 676], 20.00th=[ 693], 00:43:47.646 | 30.00th=[ 709], 40.00th=[ 725], 50.00th=[41157], 60.00th=[41157], 00:43:47.646 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:47.646 | 99.00th=[41157], 99.50th=[41157], 
99.90th=[47973], 99.95th=[47973], 00:43:47.646 | 99.99th=[47973] 00:43:47.646 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=761.26, stdev=20.18, samples=19 00:43:47.646 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:43:47.646 lat (usec) : 750=48.00%, 1000=1.89% 00:43:47.646 lat (msec) : 50=50.11% 00:43:47.646 cpu : usr=89.53%, sys=10.20%, ctx=13, majf=0, minf=226 00:43:47.646 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:47.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:47.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:47.646 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:47.646 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:47.646 00:43:47.646 Run status group 0 (all jobs): 00:43:47.646 READ: bw=760KiB/s (778kB/s), 760KiB/s-760KiB/s (778kB/s-778kB/s), io=7600KiB (7782kB), run=10002-10002msec 00:43:47.646 09:09:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:43:47.646 09:09:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:43:47.646 09:09:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:43:47.646 09:09:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:47.646 09:09:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:43:47.646 09:09:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:47.646 09:09:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:47.646 09:09:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:47.646 09:09:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:47.646 09:09:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:47.646 09:09:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:47.646 09:09:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:47.647 09:09:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:47.647 00:43:47.647 real 0m11.214s 00:43:47.647 user 0m10.162s 00:43:47.647 sys 0m1.346s 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # xtrace_disable 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:47.647 ************************************ 00:43:47.647 END TEST fio_dif_1_default 00:43:47.647 ************************************ 00:43:47.647 09:09:41 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:43:47.647 09:09:41 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:43:47.647 09:09:41 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:43:47.647 09:09:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:47.647 ************************************ 00:43:47.647 START TEST fio_dif_1_multi_subsystems 00:43:47.647 ************************************ 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # fio_dif_1_multi_subsystems 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:43:47.647 09:09:41 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:47.647 bdev_null0 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:47.647 [2024-05-15 09:09:41.081226] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:47.647 bdev_null1 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:43:47.647 { 00:43:47.647 "params": { 00:43:47.647 "name": "Nvme$subsystem", 00:43:47.647 "trtype": "$TEST_TRANSPORT", 00:43:47.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:47.647 "adrfam": "ipv4", 00:43:47.647 "trsvcid": "$NVMF_PORT", 00:43:47.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:47.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:47.647 "hdgst": ${hdgst:-false}, 00:43:47.647 "ddgst": ${ddgst:-false} 00:43:47.647 }, 00:43:47.647 "method": "bdev_nvme_attach_controller" 00:43:47.647 } 00:43:47.647 EOF 00:43:47.647 )") 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local sanitizers 00:43:47.647 09:09:41 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # shift 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local asan_lib= 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # grep libasan 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:43:47.647 { 00:43:47.647 "params": { 00:43:47.647 "name": "Nvme$subsystem", 00:43:47.647 "trtype": "$TEST_TRANSPORT", 00:43:47.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:47.647 "adrfam": "ipv4", 00:43:47.647 "trsvcid": "$NVMF_PORT", 00:43:47.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:47.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:47.647 "hdgst": ${hdgst:-false}, 00:43:47.647 "ddgst": ${ddgst:-false} 00:43:47.647 }, 00:43:47.647 "method": "bdev_nvme_attach_controller" 00:43:47.647 } 00:43:47.647 EOF 00:43:47.647 )") 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
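For context, a sketch of what those per-controller fragments expand into: gen_nvmf_target_json joins one { "params": ..., "method": "bdev_nvme_attach_controller" } block per subsystem (the "jq ." pass above pretty-prints the result), and the fio spdk_bdev ioengine reads the document via --spdk_json_conf /dev/fd/62. The outer "subsystems"/"bdev" wrapper below is an assumption based on SPDK's usual JSON config layout; the Nvme0 field values are copied from the expanded output logged just below, and the Nvme1 entry is analogous.

# Sketch only: wrapper layout assumed, field values taken from the log.
cat <<'EOF' > /tmp/nvmf_target.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF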
00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:43:47.647 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:43:47.647 "params": { 00:43:47.647 "name": "Nvme0", 00:43:47.647 "trtype": "tcp", 00:43:47.647 "traddr": "10.0.0.2", 00:43:47.647 "adrfam": "ipv4", 00:43:47.647 "trsvcid": "4420", 00:43:47.647 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:47.647 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:47.647 "hdgst": false, 00:43:47.647 "ddgst": false 00:43:47.647 }, 00:43:47.647 "method": "bdev_nvme_attach_controller" 00:43:47.647 },{ 00:43:47.647 "params": { 00:43:47.647 "name": "Nvme1", 00:43:47.647 "trtype": "tcp", 00:43:47.647 "traddr": "10.0.0.2", 00:43:47.647 "adrfam": "ipv4", 00:43:47.648 "trsvcid": "4420", 00:43:47.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:47.648 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:47.648 "hdgst": false, 00:43:47.648 "ddgst": false 00:43:47.648 }, 00:43:47.648 "method": "bdev_nvme_attach_controller" 00:43:47.648 }' 00:43:47.648 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # asan_lib= 00:43:47.648 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:43:47.648 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:43:47.648 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:47.648 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:43:47.648 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:43:47.648 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # asan_lib= 00:43:47.648 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:43:47.648 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:47.648 09:09:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:47.648 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:47.648 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:47.648 fio-3.35 00:43:47.648 Starting 2 threads 00:43:47.648 EAL: No free 2048 kB hugepages reported on node 1 00:43:57.611 00:43:57.611 filename0: (groupid=0, jobs=1): err= 0: pid=2453779: Wed May 15 09:09:52 2024 00:43:57.611 read: IOPS=96, BW=388KiB/s (397kB/s)(3888KiB/10029msec) 00:43:57.611 slat (nsec): min=7359, max=47112, avg=9566.09, stdev=3309.12 00:43:57.611 clat (usec): min=40729, max=42867, avg=41241.59, stdev=450.91 00:43:57.611 lat (usec): min=40737, max=42906, avg=41251.16, stdev=451.02 00:43:57.611 clat percentiles (usec): 00:43:57.611 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:57.611 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:57.611 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:43:57.611 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:43:57.611 | 99.99th=[42730] 
00:43:57.611 bw ( KiB/s): min= 384, max= 416, per=33.78%, avg=387.20, stdev= 9.85, samples=20 00:43:57.611 iops : min= 96, max= 104, avg=96.80, stdev= 2.46, samples=20 00:43:57.611 lat (msec) : 50=100.00% 00:43:57.611 cpu : usr=94.14%, sys=5.56%, ctx=17, majf=0, minf=72 00:43:57.611 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:57.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:57.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:57.611 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:57.611 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:57.611 filename1: (groupid=0, jobs=1): err= 0: pid=2453780: Wed May 15 09:09:52 2024 00:43:57.611 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10040msec) 00:43:57.611 slat (nsec): min=7321, max=60857, avg=9328.89, stdev=3131.05 00:43:57.611 clat (usec): min=642, max=42907, avg=21063.21, stdev=20286.44 00:43:57.611 lat (usec): min=650, max=42931, avg=21072.54, stdev=20286.20 00:43:57.611 clat percentiles (usec): 00:43:57.611 | 1.00th=[ 660], 5.00th=[ 660], 10.00th=[ 668], 20.00th=[ 676], 00:43:57.611 | 30.00th=[ 685], 40.00th=[ 701], 50.00th=[40633], 60.00th=[41157], 00:43:57.611 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:57.611 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:43:57.611 | 99.99th=[42730] 00:43:57.611 bw ( KiB/s): min= 672, max= 768, per=66.33%, avg=760.00, stdev=25.16, samples=20 00:43:57.611 iops : min= 168, max= 192, avg=190.00, stdev= 6.29, samples=20 00:43:57.611 lat (usec) : 750=46.90%, 1000=2.05% 00:43:57.611 lat (msec) : 2=0.84%, 50=50.21% 00:43:57.611 cpu : usr=95.02%, sys=4.68%, ctx=14, majf=0, minf=187 00:43:57.611 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:57.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:57.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:57.611 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:57.611 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:57.611 00:43:57.611 Run status group 0 (all jobs): 00:43:57.611 READ: bw=1146KiB/s (1173kB/s), 388KiB/s-759KiB/s (397kB/s-777kB/s), io=11.2MiB (11.8MB), run=10029-10040msec 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:57.869 00:43:57.869 real 0m11.563s 00:43:57.869 user 0m20.512s 00:43:57.869 sys 0m1.308s 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # xtrace_disable 00:43:57.869 09:09:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:57.869 ************************************ 00:43:57.869 END TEST fio_dif_1_multi_subsystems 00:43:57.869 ************************************ 00:43:57.869 09:09:52 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:43:57.869 09:09:52 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:43:57.869 09:09:52 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:43:57.869 09:09:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:58.128 ************************************ 00:43:58.128 START TEST fio_dif_rand_params 00:43:58.128 ************************************ 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # fio_dif_rand_params 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:58.128 09:09:52 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:58.128 bdev_null0 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:58.128 [2024-05-15 09:09:52.708152] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local 
sanitizers 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:43:58.128 { 00:43:58.128 "params": { 00:43:58.128 "name": "Nvme$subsystem", 00:43:58.128 "trtype": "$TEST_TRANSPORT", 00:43:58.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:58.128 "adrfam": "ipv4", 00:43:58.128 "trsvcid": "$NVMF_PORT", 00:43:58.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:58.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:58.128 "hdgst": ${hdgst:-false}, 00:43:58.128 "ddgst": ${ddgst:-false} 00:43:58.128 }, 00:43:58.128 "method": "bdev_nvme_attach_controller" 00:43:58.128 } 00:43:58.128 EOF 00:43:58.128 )") 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:43:58.128 09:09:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:43:58.129 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:58.129 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:43:58.129 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:58.129 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:43:58.129 09:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:58.129 09:09:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
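The invocation that follows can be reproduced by hand. The harness hands fio two process-substitution descriptors (/dev/fd/62 for the SPDK JSON config, /dev/fd/61 for the generated fio job file); a rough stand-alone equivalent using ordinary files is sketched below. The job-file keys are reconstructed from the parameters visible in this test (randread, bs=128k, numjobs=3, iodepth=3, runtime=5) and the bdev name Nvme0n1 follows SPDK's <controller-name>n<nsid> convention, so treat both as assumptions rather than the harness's exact gen_fio_conf output.

# Stand-alone sketch, not the harness's exact invocation.
# Assumes a config in the shape sketched earlier at /tmp/nvmf_target.json.
cat <<'EOF' > /tmp/dif.fio
[global]
ioengine=spdk_bdev
spdk_json_conf=/tmp/nvmf_target.json
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
EOF
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio /tmp/dif.fio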
00:43:58.129 09:09:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:43:58.129 09:09:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:43:58.129 "params": { 00:43:58.129 "name": "Nvme0", 00:43:58.129 "trtype": "tcp", 00:43:58.129 "traddr": "10.0.0.2", 00:43:58.129 "adrfam": "ipv4", 00:43:58.129 "trsvcid": "4420", 00:43:58.129 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:58.129 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:58.129 "hdgst": false, 00:43:58.129 "ddgst": false 00:43:58.129 }, 00:43:58.129 "method": "bdev_nvme_attach_controller" 00:43:58.129 }' 00:43:58.129 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:43:58.129 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:43:58.129 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:43:58.129 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:58.129 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:43:58.129 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:43:58.129 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:43:58.129 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:43:58.129 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:58.129 09:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:58.387 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:58.387 ... 
00:43:58.387 fio-3.35 00:43:58.387 Starting 3 threads 00:43:58.387 EAL: No free 2048 kB hugepages reported on node 1 00:44:04.943 00:44:04.943 filename0: (groupid=0, jobs=1): err= 0: pid=2455175: Wed May 15 09:09:58 2024 00:44:04.943 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(132MiB/5006msec) 00:44:04.943 slat (nsec): min=7449, max=61701, avg=15542.82, stdev=5695.77 00:44:04.943 clat (usec): min=5206, max=92773, avg=14158.78, stdev=10221.50 00:44:04.943 lat (usec): min=5218, max=92786, avg=14174.32, stdev=10221.45 00:44:04.943 clat percentiles (usec): 00:44:04.943 | 1.00th=[ 5800], 5.00th=[ 6652], 10.00th=[ 8356], 20.00th=[ 9503], 00:44:04.943 | 30.00th=[10683], 40.00th=[11731], 50.00th=[12518], 60.00th=[13042], 00:44:04.943 | 70.00th=[13566], 80.00th=[14484], 90.00th=[16057], 95.00th=[47973], 00:44:04.943 | 99.00th=[54789], 99.50th=[55837], 99.90th=[91751], 99.95th=[92799], 00:44:04.943 | 99.99th=[92799] 00:44:04.943 bw ( KiB/s): min=18944, max=34816, per=34.35%, avg=27033.60, stdev=5711.85, samples=10 00:44:04.944 iops : min= 148, max= 272, avg=211.20, stdev=44.62, samples=10 00:44:04.944 lat (msec) : 10=24.74%, 20=69.97%, 50=0.85%, 100=4.44% 00:44:04.944 cpu : usr=92.01%, sys=7.53%, ctx=15, majf=0, minf=147 00:44:04.944 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:04.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.944 issued rwts: total=1059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.944 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:04.944 filename0: (groupid=0, jobs=1): err= 0: pid=2455176: Wed May 15 09:09:58 2024 00:44:04.944 read: IOPS=203, BW=25.4MiB/s (26.6MB/s)(127MiB/5006msec) 00:44:04.944 slat (nsec): min=7475, max=46993, avg=14529.73, stdev=4634.24 00:44:04.944 clat (usec): min=5088, max=58052, avg=14747.41, stdev=10189.70 00:44:04.944 lat (usec): min=5099, max=58066, avg=14761.94, stdev=10189.95 00:44:04.944 clat percentiles (usec): 00:44:04.944 | 1.00th=[ 5473], 5.00th=[ 5800], 10.00th=[ 7963], 20.00th=[ 9241], 00:44:04.944 | 30.00th=[10683], 40.00th=[11863], 50.00th=[12780], 60.00th=[13566], 00:44:04.944 | 70.00th=[14746], 80.00th=[16057], 90.00th=[17957], 95.00th=[49021], 00:44:04.944 | 99.00th=[55837], 99.50th=[56361], 99.90th=[57410], 99.95th=[57934], 00:44:04.944 | 99.99th=[57934] 00:44:04.944 bw ( KiB/s): min=20480, max=30720, per=32.99%, avg=25963.70, stdev=3522.45, samples=10 00:44:04.944 iops : min= 160, max= 240, avg=202.80, stdev=27.51, samples=10 00:44:04.944 lat (msec) : 10=25.07%, 20=68.44%, 50=1.97%, 100=4.52% 00:44:04.944 cpu : usr=92.09%, sys=7.47%, ctx=14, majf=0, minf=102 00:44:04.944 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:04.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.944 issued rwts: total=1017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.944 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:04.944 filename0: (groupid=0, jobs=1): err= 0: pid=2455177: Wed May 15 09:09:58 2024 00:44:04.944 read: IOPS=203, BW=25.5MiB/s (26.7MB/s)(129MiB/5048msec) 00:44:04.944 slat (nsec): min=5821, max=57038, avg=19619.89, stdev=8765.38 00:44:04.944 clat (usec): min=4693, max=55766, avg=14662.94, stdev=10215.90 00:44:04.944 lat (usec): min=4706, max=55781, avg=14682.56, stdev=10215.82 00:44:04.944 clat percentiles (usec): 
00:44:04.944 | 1.00th=[ 5407], 5.00th=[ 5866], 10.00th=[ 8455], 20.00th=[ 9896], 00:44:04.944 | 30.00th=[11076], 40.00th=[11863], 50.00th=[12518], 60.00th=[13304], 00:44:04.944 | 70.00th=[14091], 80.00th=[15008], 90.00th=[16712], 95.00th=[50594], 00:44:04.944 | 99.00th=[53740], 99.50th=[54264], 99.90th=[54789], 99.95th=[55837], 00:44:04.944 | 99.99th=[55837] 00:44:04.944 bw ( KiB/s): min=22272, max=32256, per=33.33%, avg=26234.90, stdev=3064.80, samples=10 00:44:04.944 iops : min= 174, max= 252, avg=204.90, stdev=23.97, samples=10 00:44:04.944 lat (msec) : 10=21.11%, 20=72.28%, 50=1.46%, 100=5.16% 00:44:04.944 cpu : usr=92.61%, sys=6.88%, ctx=11, majf=0, minf=61 00:44:04.944 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:04.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.944 issued rwts: total=1028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.944 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:04.944 00:44:04.944 Run status group 0 (all jobs): 00:44:04.944 READ: bw=76.9MiB/s (80.6MB/s), 25.4MiB/s-26.4MiB/s (26.6MB/s-27.7MB/s), io=388MiB (407MB), run=5006-5048msec 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
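After this teardown the harness rebuilds the targets with NULL_DIF=2: each null bdev is recreated with 16 bytes of per-block metadata and DIF type 2 protection, and three subsystems (cnode0 through cnode2) are exported. The same setup can be driven manually with SPDK's rpc.py. The script path below is an assumption (point it at your SPDK checkout); the RPC names and arguments are exactly those echoed in the log that follows.

# Manual sketch of one DIF-protected target. $SPDK_DIR is an assumption.
# Assumes the tcp transport was already created (nvmf_create_transport -t tcp),
# which the harness does earlier in the run.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py"
# 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 2 (as in the log)
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420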
00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.944 bdev_null0 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.944 [2024-05-15 09:09:58.985786] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.944 bdev_null1 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:04.944 09:09:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.944 bdev_null2 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:04.944 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:44:04.945 09:09:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:44:04.945 { 00:44:04.945 "params": { 00:44:04.945 "name": "Nvme$subsystem", 00:44:04.945 "trtype": "$TEST_TRANSPORT", 00:44:04.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:04.945 "adrfam": "ipv4", 00:44:04.945 "trsvcid": "$NVMF_PORT", 00:44:04.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:04.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:04.945 "hdgst": ${hdgst:-false}, 00:44:04.945 "ddgst": ${ddgst:-false} 00:44:04.945 }, 00:44:04.945 "method": "bdev_nvme_attach_controller" 00:44:04.945 } 00:44:04.945 EOF 00:44:04.945 )") 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:44:04.945 { 00:44:04.945 "params": { 00:44:04.945 "name": "Nvme$subsystem", 00:44:04.945 "trtype": "$TEST_TRANSPORT", 00:44:04.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:04.945 "adrfam": "ipv4", 00:44:04.945 "trsvcid": "$NVMF_PORT", 00:44:04.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:04.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:04.945 "hdgst": ${hdgst:-false}, 00:44:04.945 "ddgst": ${ddgst:-false} 00:44:04.945 }, 00:44:04.945 "method": "bdev_nvme_attach_controller" 00:44:04.945 } 00:44:04.945 EOF 00:44:04.945 )") 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:44:04.945 { 00:44:04.945 "params": { 00:44:04.945 "name": "Nvme$subsystem", 00:44:04.945 "trtype": "$TEST_TRANSPORT", 00:44:04.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:04.945 "adrfam": "ipv4", 00:44:04.945 "trsvcid": "$NVMF_PORT", 00:44:04.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:04.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:04.945 "hdgst": ${hdgst:-false}, 00:44:04.945 "ddgst": ${ddgst:-false} 00:44:04.945 }, 00:44:04.945 "method": "bdev_nvme_attach_controller" 00:44:04.945 } 00:44:04.945 EOF 00:44:04.945 )") 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:44:04.945 "params": { 00:44:04.945 "name": "Nvme0", 00:44:04.945 "trtype": "tcp", 00:44:04.945 "traddr": "10.0.0.2", 00:44:04.945 "adrfam": "ipv4", 00:44:04.945 "trsvcid": "4420", 00:44:04.945 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:04.945 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:04.945 "hdgst": false, 00:44:04.945 "ddgst": false 00:44:04.945 }, 00:44:04.945 "method": "bdev_nvme_attach_controller" 00:44:04.945 },{ 00:44:04.945 "params": { 00:44:04.945 "name": "Nvme1", 00:44:04.945 "trtype": "tcp", 00:44:04.945 "traddr": "10.0.0.2", 00:44:04.945 "adrfam": "ipv4", 00:44:04.945 "trsvcid": "4420", 00:44:04.945 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:04.945 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:04.945 "hdgst": false, 00:44:04.945 "ddgst": false 00:44:04.945 }, 00:44:04.945 "method": "bdev_nvme_attach_controller" 00:44:04.945 },{ 00:44:04.945 "params": { 00:44:04.945 "name": "Nvme2", 00:44:04.945 "trtype": "tcp", 00:44:04.945 "traddr": "10.0.0.2", 00:44:04.945 "adrfam": "ipv4", 00:44:04.945 "trsvcid": "4420", 00:44:04.945 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:44:04.945 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:44:04.945 "hdgst": false, 00:44:04.945 "ddgst": false 00:44:04.945 }, 00:44:04.945 "method": "bdev_nvme_attach_controller" 00:44:04.945 }' 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1342 -- # asan_lib= 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:04.945 09:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:04.945 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:04.945 ... 00:44:04.945 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:04.945 ... 00:44:04.945 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:04.945 ... 00:44:04.945 fio-3.35 00:44:04.945 Starting 24 threads 00:44:04.945 EAL: No free 2048 kB hugepages reported on node 1 00:44:17.148 00:44:17.148 filename0: (groupid=0, jobs=1): err= 0: pid=2456041: Wed May 15 09:10:10 2024 00:44:17.148 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.3MiB/10005msec) 00:44:17.148 slat (nsec): min=8175, max=78971, avg=23269.01, stdev=11049.90 00:44:17.148 clat (usec): min=16272, max=92677, avg=33942.82, stdev=2561.44 00:44:17.148 lat (usec): min=16284, max=92715, avg=33966.09, stdev=2563.56 00:44:17.148 clat percentiles (usec): 00:44:17.148 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:44:17.148 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:44:17.148 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:44:17.148 | 99.00th=[36963], 99.50th=[36963], 99.90th=[69731], 99.95th=[69731], 00:44:17.148 | 99.99th=[92799] 00:44:17.148 bw ( KiB/s): min= 1532, max= 1936, per=4.14%, avg=1865.89, stdev=98.25, samples=19 00:44:17.148 iops : min= 383, max= 484, avg=466.47, stdev=24.56, samples=19 00:44:17.148 lat (msec) : 20=0.04%, 50=99.62%, 100=0.34% 00:44:17.148 cpu : usr=98.35%, sys=1.27%, ctx=13, majf=0, minf=54 00:44:17.148 IO depths : 1=4.5%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.0%, 32=0.0%, >=64=0.0% 00:44:17.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.148 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.148 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.148 filename0: (groupid=0, jobs=1): err= 0: pid=2456042: Wed May 15 09:10:10 2024 00:44:17.148 read: IOPS=486, BW=1945KiB/s (1991kB/s)(19.0MiB/10017msec) 00:44:17.148 slat (usec): min=5, max=118, avg=25.69, stdev=21.38 00:44:17.148 clat (usec): min=17355, max=60232, avg=32757.14, stdev=4558.27 00:44:17.148 lat (usec): min=17448, max=60248, avg=32782.83, stdev=4556.06 00:44:17.148 clat percentiles (usec): 00:44:17.148 | 1.00th=[21365], 5.00th=[23987], 10.00th=[25822], 20.00th=[30278], 00:44:17.148 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:44:17.148 | 70.00th=[33817], 80.00th=[34341], 90.00th=[37487], 95.00th=[38011], 00:44:17.148 | 99.00th=[44303], 99.50th=[49021], 99.90th=[60031], 99.95th=[60031], 00:44:17.148 | 99.99th=[60031] 00:44:17.148 bw ( KiB/s): min= 1539, max= 2240, per=4.31%, avg=1942.75, stdev=140.05, samples=20 00:44:17.148 iops : min= 384, max= 560, avg=485.65, stdev=35.13, samples=20 00:44:17.148 lat (msec) : 20=0.29%, 50=99.38%, 
100=0.33% 00:44:17.148 cpu : usr=95.90%, sys=2.55%, ctx=136, majf=0, minf=73 00:44:17.148 IO depths : 1=1.0%, 2=2.3%, 4=6.9%, 8=75.3%, 16=14.6%, 32=0.0%, >=64=0.0% 00:44:17.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.148 complete : 0=0.0%, 4=90.0%, 8=7.4%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.148 issued rwts: total=4870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.148 filename0: (groupid=0, jobs=1): err= 0: pid=2456043: Wed May 15 09:10:10 2024 00:44:17.148 read: IOPS=472, BW=1892KiB/s (1937kB/s)(18.5MiB/10027msec) 00:44:17.148 slat (usec): min=3, max=101, avg=35.49, stdev=16.20 00:44:17.148 clat (usec): min=7260, max=47267, avg=33554.82, stdev=2367.65 00:44:17.148 lat (usec): min=7271, max=47304, avg=33590.31, stdev=2368.60 00:44:17.148 clat percentiles (usec): 00:44:17.148 | 1.00th=[21890], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:44:17.148 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:44:17.148 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[35914], 00:44:17.148 | 99.00th=[36963], 99.50th=[36963], 99.90th=[37487], 99.95th=[37487], 00:44:17.148 | 99.99th=[47449] 00:44:17.148 bw ( KiB/s): min= 1792, max= 1968, per=4.19%, avg=1890.40, stdev=59.25, samples=20 00:44:17.148 iops : min= 448, max= 492, avg=472.60, stdev=14.81, samples=20 00:44:17.148 lat (msec) : 10=0.42%, 20=0.46%, 50=99.11% 00:44:17.148 cpu : usr=95.17%, sys=2.62%, ctx=212, majf=0, minf=65 00:44:17.148 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:17.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.148 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.148 issued rwts: total=4742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.148 filename0: (groupid=0, jobs=1): err= 0: pid=2456044: Wed May 15 09:10:10 2024 00:44:17.148 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.3MiB/10007msec) 00:44:17.148 slat (nsec): min=8485, max=82211, avg=25005.10, stdev=13328.67 00:44:17.148 clat (usec): min=26765, max=62014, avg=33945.61, stdev=1843.04 00:44:17.148 lat (usec): min=26799, max=62054, avg=33970.62, stdev=1844.82 00:44:17.148 clat percentiles (usec): 00:44:17.149 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:44:17.149 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:44:17.149 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[35914], 00:44:17.149 | 99.00th=[36963], 99.50th=[37487], 99.90th=[62129], 99.95th=[62129], 00:44:17.149 | 99.99th=[62129] 00:44:17.149 bw ( KiB/s): min= 1536, max= 1920, per=4.15%, avg=1872.84, stdev=97.39, samples=19 00:44:17.149 iops : min= 384, max= 480, avg=468.21, stdev=24.35, samples=19 00:44:17.149 lat (msec) : 50=99.66%, 100=0.34% 00:44:17.149 cpu : usr=95.60%, sys=2.61%, ctx=152, majf=0, minf=48 00:44:17.149 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:17.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.149 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.149 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.149 filename0: (groupid=0, jobs=1): err= 0: pid=2456045: Wed May 15 09:10:10 2024 00:44:17.149 read: IOPS=470, 
BW=1884KiB/s (1929kB/s)(18.4MiB/10023msec) 00:44:17.149 slat (usec): min=5, max=131, avg=81.63, stdev=12.81 00:44:17.149 clat (usec): min=4694, max=37432, avg=33247.29, stdev=1886.13 00:44:17.149 lat (usec): min=4723, max=37449, avg=33328.91, stdev=1886.54 00:44:17.149 clat percentiles (usec): 00:44:17.149 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:44:17.149 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:44:17.149 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:44:17.149 | 99.00th=[36963], 99.50th=[36963], 99.90th=[37487], 99.95th=[37487], 00:44:17.149 | 99.99th=[37487] 00:44:17.149 bw ( KiB/s): min= 1792, max= 1923, per=4.17%, avg=1882.20, stdev=59.92, samples=20 00:44:17.149 iops : min= 448, max= 480, avg=470.40, stdev=15.05, samples=20 00:44:17.149 lat (msec) : 10=0.30%, 20=0.04%, 50=99.66% 00:44:17.149 cpu : usr=98.34%, sys=1.20%, ctx=15, majf=0, minf=68 00:44:17.149 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:17.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.149 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.149 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.149 filename0: (groupid=0, jobs=1): err= 0: pid=2456046: Wed May 15 09:10:10 2024 00:44:17.149 read: IOPS=469, BW=1879KiB/s (1924kB/s)(18.4MiB/10012msec) 00:44:17.149 slat (usec): min=8, max=160, avg=62.06, stdev=26.90 00:44:17.149 clat (usec): min=11686, max=55093, avg=33495.28, stdev=2040.45 00:44:17.149 lat (usec): min=11711, max=55130, avg=33557.34, stdev=2036.98 00:44:17.149 clat percentiles (usec): 00:44:17.149 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:44:17.149 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:44:17.149 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:44:17.149 | 99.00th=[36963], 99.50th=[37487], 99.90th=[54789], 99.95th=[54789], 00:44:17.149 | 99.99th=[55313] 00:44:17.149 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1875.20, stdev=75.15, samples=20 00:44:17.149 iops : min= 416, max= 480, avg=468.80, stdev=18.79, samples=20 00:44:17.149 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:44:17.149 cpu : usr=98.07%, sys=1.46%, ctx=37, majf=0, minf=50 00:44:17.149 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:17.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.149 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.149 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.149 filename0: (groupid=0, jobs=1): err= 0: pid=2456047: Wed May 15 09:10:10 2024 00:44:17.149 read: IOPS=469, BW=1877KiB/s (1922kB/s)(18.4MiB/10024msec) 00:44:17.149 slat (nsec): min=13379, max=95368, avg=42314.55, stdev=13555.34 00:44:17.149 clat (usec): min=23928, max=46238, avg=33720.18, stdev=1224.86 00:44:17.149 lat (usec): min=23945, max=46293, avg=33762.50, stdev=1224.01 00:44:17.149 clat percentiles (usec): 00:44:17.149 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:44:17.149 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:44:17.149 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:44:17.149 | 99.00th=[36963], 
99.50th=[37487], 99.90th=[45876], 99.95th=[46400], 00:44:17.149 | 99.99th=[46400] 00:44:17.149 bw ( KiB/s): min= 1667, max= 1920, per=4.16%, avg=1875.35, stdev=74.71, samples=20 00:44:17.149 iops : min= 416, max= 480, avg=468.80, stdev=18.79, samples=20 00:44:17.149 lat (msec) : 50=100.00% 00:44:17.149 cpu : usr=94.16%, sys=3.40%, ctx=361, majf=0, minf=53 00:44:17.149 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:17.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.149 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.149 issued rwts: total=4703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.149 filename0: (groupid=0, jobs=1): err= 0: pid=2456048: Wed May 15 09:10:10 2024 00:44:17.149 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.3MiB/10008msec) 00:44:17.149 slat (nsec): min=8613, max=96394, avg=25292.18, stdev=14486.58 00:44:17.149 clat (usec): min=19887, max=73526, avg=33937.38, stdev=1684.34 00:44:17.149 lat (usec): min=19942, max=73588, avg=33962.67, stdev=1686.98 00:44:17.149 clat percentiles (usec): 00:44:17.149 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:44:17.149 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:44:17.149 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:44:17.149 | 99.00th=[36963], 99.50th=[36963], 99.90th=[55837], 99.95th=[56361], 00:44:17.149 | 99.99th=[73925] 00:44:17.149 bw ( KiB/s): min= 1539, max= 1920, per=4.14%, avg=1868.95, stdev=95.96, samples=20 00:44:17.149 iops : min= 384, max= 480, avg=467.20, stdev=24.13, samples=20 00:44:17.149 lat (msec) : 20=0.04%, 50=99.62%, 100=0.34% 00:44:17.149 cpu : usr=97.84%, sys=1.56%, ctx=59, majf=0, minf=62 00:44:17.149 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:17.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.149 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.149 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.149 filename1: (groupid=0, jobs=1): err= 0: pid=2456049: Wed May 15 09:10:10 2024 00:44:17.149 read: IOPS=469, BW=1876KiB/s (1921kB/s)(18.4MiB/10025msec) 00:44:17.149 slat (nsec): min=11166, max=83223, avg=41428.65, stdev=11646.62 00:44:17.149 clat (usec): min=24023, max=46299, avg=33733.26, stdev=1213.50 00:44:17.149 lat (usec): min=24067, max=46329, avg=33774.69, stdev=1212.58 00:44:17.149 clat percentiles (usec): 00:44:17.149 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:44:17.149 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:44:17.149 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:44:17.149 | 99.00th=[36963], 99.50th=[36963], 99.90th=[46400], 99.95th=[46400], 00:44:17.149 | 99.99th=[46400] 00:44:17.149 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1875.20, stdev=75.15, samples=20 00:44:17.149 iops : min= 416, max= 480, avg=468.80, stdev=18.79, samples=20 00:44:17.149 lat (msec) : 50=100.00% 00:44:17.149 cpu : usr=97.70%, sys=1.63%, ctx=70, majf=0, minf=48 00:44:17.149 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:17.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.149 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:44:17.149 issued rwts: total=4702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.149 filename1: (groupid=0, jobs=1): err= 0: pid=2456050: Wed May 15 09:10:10 2024 00:44:17.149 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.3MiB/10006msec) 00:44:17.149 slat (usec): min=9, max=111, avg=35.58, stdev=23.80 00:44:17.149 clat (usec): min=21020, max=70851, avg=33815.52, stdev=2425.95 00:44:17.149 lat (usec): min=21034, max=70866, avg=33851.10, stdev=2425.81 00:44:17.149 clat percentiles (usec): 00:44:17.149 | 1.00th=[32113], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:44:17.149 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:44:17.149 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[35914], 00:44:17.149 | 99.00th=[36963], 99.50th=[36963], 99.90th=[70779], 99.95th=[70779], 00:44:17.149 | 99.99th=[70779] 00:44:17.149 bw ( KiB/s): min= 1536, max= 1920, per=4.14%, avg=1866.11, stdev=98.37, samples=19 00:44:17.149 iops : min= 384, max= 480, avg=466.53, stdev=24.59, samples=19 00:44:17.149 lat (msec) : 50=99.66%, 100=0.34% 00:44:17.149 cpu : usr=97.68%, sys=1.81%, ctx=14, majf=0, minf=67 00:44:17.149 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:17.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.149 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.149 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.149 filename1: (groupid=0, jobs=1): err= 0: pid=2456051: Wed May 15 09:10:10 2024 00:44:17.149 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.3MiB/10007msec) 00:44:17.149 slat (usec): min=8, max=110, avg=29.17, stdev=19.01 00:44:17.149 clat (usec): min=16414, max=69577, avg=33886.33, stdev=2492.64 00:44:17.149 lat (usec): min=16436, max=69622, avg=33915.50, stdev=2493.68 00:44:17.149 clat percentiles (usec): 00:44:17.149 | 1.00th=[32375], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:44:17.149 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:44:17.149 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:44:17.149 | 99.00th=[36963], 99.50th=[37487], 99.90th=[69731], 99.95th=[69731], 00:44:17.149 | 99.99th=[69731] 00:44:17.149 bw ( KiB/s): min= 1536, max= 1920, per=4.14%, avg=1866.11, stdev=98.37, samples=19 00:44:17.149 iops : min= 384, max= 480, avg=466.53, stdev=24.59, samples=19 00:44:17.149 lat (msec) : 20=0.13%, 50=99.45%, 100=0.43% 00:44:17.149 cpu : usr=98.21%, sys=1.26%, ctx=25, majf=0, minf=63 00:44:17.149 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:17.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.149 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.149 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.150 filename1: (groupid=0, jobs=1): err= 0: pid=2456052: Wed May 15 09:10:10 2024 00:44:17.150 read: IOPS=469, BW=1877KiB/s (1922kB/s)(18.4MiB/10025msec) 00:44:17.150 slat (usec): min=11, max=111, avg=48.39, stdev=18.17 00:44:17.150 clat (usec): min=24222, max=46469, avg=33667.18, stdev=1229.47 00:44:17.150 lat (usec): min=24260, max=46491, avg=33715.57, stdev=1229.64 00:44:17.150 clat percentiles (usec): 00:44:17.150 | 
1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:44:17.150 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:44:17.150 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:44:17.150 | 99.00th=[36963], 99.50th=[36963], 99.90th=[46400], 99.95th=[46400], 00:44:17.150 | 99.99th=[46400] 00:44:17.150 bw ( KiB/s): min= 1667, max= 1920, per=4.16%, avg=1875.35, stdev=74.71, samples=20 00:44:17.150 iops : min= 416, max= 480, avg=468.80, stdev=18.79, samples=20 00:44:17.150 lat (msec) : 50=100.00% 00:44:17.150 cpu : usr=97.97%, sys=1.46%, ctx=49, majf=0, minf=36 00:44:17.150 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:17.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.150 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.150 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.150 filename1: (groupid=0, jobs=1): err= 0: pid=2456053: Wed May 15 09:10:10 2024 00:44:17.150 read: IOPS=472, BW=1890KiB/s (1935kB/s)(18.5MiB/10024msec) 00:44:17.150 slat (usec): min=8, max=124, avg=37.02, stdev=25.48 00:44:17.150 clat (usec): min=9267, max=46299, avg=33590.14, stdev=2252.66 00:44:17.150 lat (usec): min=9279, max=46361, avg=33627.15, stdev=2253.05 00:44:17.150 clat percentiles (usec): 00:44:17.150 | 1.00th=[22414], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:44:17.150 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:44:17.150 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[35914], 00:44:17.150 | 99.00th=[36963], 99.50th=[36963], 99.90th=[37487], 99.95th=[45351], 00:44:17.150 | 99.99th=[46400] 00:44:17.150 bw ( KiB/s): min= 1792, max= 1920, per=4.19%, avg=1888.00, stdev=55.18, samples=20 00:44:17.150 iops : min= 448, max= 480, avg=472.00, stdev=13.80, samples=20 00:44:17.150 lat (msec) : 10=0.34%, 20=0.34%, 50=99.32% 00:44:17.150 cpu : usr=97.39%, sys=2.04%, ctx=67, majf=0, minf=67 00:44:17.150 IO depths : 1=4.5%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.0%, 32=0.0%, >=64=0.0% 00:44:17.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.150 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.150 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.150 filename1: (groupid=0, jobs=1): err= 0: pid=2456054: Wed May 15 09:10:10 2024 00:44:17.150 read: IOPS=471, BW=1885KiB/s (1930kB/s)(18.4MiB/10016msec) 00:44:17.150 slat (nsec): min=6225, max=97660, avg=41319.58, stdev=16180.33 00:44:17.150 clat (usec): min=12992, max=46290, avg=33610.73, stdev=1733.55 00:44:17.150 lat (usec): min=12999, max=46351, avg=33652.05, stdev=1734.95 00:44:17.150 clat percentiles (usec): 00:44:17.150 | 1.00th=[26346], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:44:17.150 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:44:17.150 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:44:17.150 | 99.00th=[36963], 99.50th=[36963], 99.90th=[42206], 99.95th=[42206], 00:44:17.150 | 99.99th=[46400] 00:44:17.150 bw ( KiB/s): min= 1792, max= 1923, per=4.17%, avg=1882.05, stdev=59.82, samples=20 00:44:17.150 iops : min= 448, max= 480, avg=470.40, stdev=15.05, samples=20 00:44:17.150 lat (msec) : 20=0.34%, 50=99.66% 00:44:17.150 cpu : usr=97.47%, sys=1.78%, 
ctx=152, majf=0, minf=55 00:44:17.150 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:17.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.150 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.150 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.150 filename1: (groupid=0, jobs=1): err= 0: pid=2456055: Wed May 15 09:10:10 2024 00:44:17.150 read: IOPS=468, BW=1875KiB/s (1920kB/s)(18.3MiB/10002msec) 00:44:17.150 slat (nsec): min=7030, max=59234, avg=27567.39, stdev=8467.08 00:44:17.150 clat (usec): min=26520, max=57129, avg=33893.91, stdev=1629.81 00:44:17.150 lat (usec): min=26544, max=57149, avg=33921.48, stdev=1628.38 00:44:17.150 clat percentiles (usec): 00:44:17.150 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:44:17.150 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:44:17.150 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[35914], 00:44:17.150 | 99.00th=[36963], 99.50th=[37487], 99.90th=[56886], 99.95th=[56886], 00:44:17.150 | 99.99th=[56886] 00:44:17.150 bw ( KiB/s): min= 1536, max= 1920, per=4.15%, avg=1872.84, stdev=97.39, samples=19 00:44:17.150 iops : min= 384, max= 480, avg=468.21, stdev=24.35, samples=19 00:44:17.150 lat (msec) : 50=99.66%, 100=0.34% 00:44:17.150 cpu : usr=98.26%, sys=1.35%, ctx=12, majf=0, minf=38 00:44:17.150 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:17.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.150 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.150 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.150 filename1: (groupid=0, jobs=1): err= 0: pid=2456056: Wed May 15 09:10:10 2024 00:44:17.150 read: IOPS=468, BW=1873KiB/s (1918kB/s)(18.3MiB/10029msec) 00:44:17.150 slat (usec): min=11, max=117, avg=50.42, stdev=19.75 00:44:17.150 clat (usec): min=27233, max=65082, avg=33651.35, stdev=1294.13 00:44:17.150 lat (usec): min=27262, max=65107, avg=33701.76, stdev=1293.78 00:44:17.150 clat percentiles (usec): 00:44:17.150 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:44:17.150 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:44:17.150 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:44:17.150 | 99.00th=[36963], 99.50th=[36963], 99.90th=[46400], 99.95th=[46400], 00:44:17.150 | 99.99th=[65274] 00:44:17.150 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1875.20, stdev=75.15, samples=20 00:44:17.150 iops : min= 416, max= 480, avg=468.80, stdev=18.79, samples=20 00:44:17.150 lat (msec) : 50=99.96%, 100=0.04% 00:44:17.150 cpu : usr=98.40%, sys=1.19%, ctx=6, majf=0, minf=33 00:44:17.150 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:17.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.150 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.150 issued rwts: total=4697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.150 filename2: (groupid=0, jobs=1): err= 0: pid=2456057: Wed May 15 09:10:10 2024 00:44:17.150 read: IOPS=469, BW=1880KiB/s (1925kB/s)(18.4MiB/10009msec) 00:44:17.150 slat 
(usec): min=9, max=120, avg=58.69, stdev=27.14 00:44:17.150 clat (usec): min=11867, max=52104, avg=33522.96, stdev=1899.40 00:44:17.150 lat (usec): min=11905, max=52145, avg=33581.65, stdev=1898.00 00:44:17.150 clat percentiles (usec): 00:44:17.150 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:44:17.150 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:44:17.150 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:44:17.150 | 99.00th=[36963], 99.50th=[37487], 99.90th=[52167], 99.95th=[52167], 00:44:17.150 | 99.99th=[52167] 00:44:17.150 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1872.84, stdev=76.45, samples=19 00:44:17.150 iops : min= 416, max= 480, avg=468.21, stdev=19.11, samples=19 00:44:17.150 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:44:17.150 cpu : usr=98.42%, sys=1.11%, ctx=35, majf=0, minf=36 00:44:17.150 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:17.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.150 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.150 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.150 filename2: (groupid=0, jobs=1): err= 0: pid=2456058: Wed May 15 09:10:10 2024 00:44:17.150 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.4MiB/10029msec) 00:44:17.150 slat (usec): min=11, max=101, avg=41.53, stdev=12.81 00:44:17.150 clat (usec): min=26662, max=66083, avg=33731.64, stdev=1330.58 00:44:17.150 lat (usec): min=26714, max=66109, avg=33773.17, stdev=1329.95 00:44:17.150 clat percentiles (usec): 00:44:17.150 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:44:17.150 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:44:17.150 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:44:17.150 | 99.00th=[36963], 99.50th=[37487], 99.90th=[46400], 99.95th=[46400], 00:44:17.150 | 99.99th=[66323] 00:44:17.150 bw ( KiB/s): min= 1667, max= 1920, per=4.16%, avg=1875.35, stdev=74.71, samples=20 00:44:17.150 iops : min= 416, max= 480, avg=468.80, stdev=18.79, samples=20 00:44:17.150 lat (msec) : 50=99.96%, 100=0.04% 00:44:17.150 cpu : usr=98.47%, sys=1.13%, ctx=15, majf=0, minf=48 00:44:17.150 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:17.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.150 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.150 issued rwts: total=4699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.150 filename2: (groupid=0, jobs=1): err= 0: pid=2456059: Wed May 15 09:10:10 2024 00:44:17.150 read: IOPS=472, BW=1890KiB/s (1935kB/s)(18.5MiB/10024msec) 00:44:17.150 slat (usec): min=5, max=131, avg=40.38, stdev=16.04 00:44:17.150 clat (usec): min=6791, max=41886, avg=33531.54, stdev=2254.44 00:44:17.150 lat (usec): min=6823, max=41931, avg=33571.92, stdev=2253.78 00:44:17.150 clat percentiles (usec): 00:44:17.150 | 1.00th=[22152], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:44:17.150 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:44:17.150 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:44:17.150 | 99.00th=[36963], 99.50th=[36963], 99.90th=[37487], 99.95th=[41681], 00:44:17.150 | 99.99th=[41681] 00:44:17.150 
bw ( KiB/s): min= 1792, max= 1920, per=4.19%, avg=1888.00, stdev=56.87, samples=20 00:44:17.150 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:44:17.150 lat (msec) : 10=0.34%, 20=0.34%, 50=99.32% 00:44:17.150 cpu : usr=96.08%, sys=2.64%, ctx=82, majf=0, minf=58 00:44:17.150 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:17.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.151 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.151 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.151 filename2: (groupid=0, jobs=1): err= 0: pid=2456060: Wed May 15 09:10:10 2024 00:44:17.151 read: IOPS=468, BW=1875KiB/s (1921kB/s)(18.4MiB/10024msec) 00:44:17.151 slat (usec): min=14, max=117, avg=45.76, stdev=17.89 00:44:17.151 clat (usec): min=23737, max=46230, avg=33701.31, stdev=1211.03 00:44:17.151 lat (usec): min=23783, max=46259, avg=33747.07, stdev=1209.56 00:44:17.151 clat percentiles (usec): 00:44:17.151 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:44:17.151 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:44:17.151 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:44:17.151 | 99.00th=[36963], 99.50th=[36963], 99.90th=[45876], 99.95th=[46400], 00:44:17.151 | 99.99th=[46400] 00:44:17.151 bw ( KiB/s): min= 1667, max= 1920, per=4.16%, avg=1875.35, stdev=74.71, samples=20 00:44:17.151 iops : min= 416, max= 480, avg=468.80, stdev=18.79, samples=20 00:44:17.151 lat (msec) : 50=100.00% 00:44:17.151 cpu : usr=96.10%, sys=2.57%, ctx=211, majf=0, minf=34 00:44:17.151 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:17.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.151 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.151 issued rwts: total=4700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.151 filename2: (groupid=0, jobs=1): err= 0: pid=2456061: Wed May 15 09:10:10 2024 00:44:17.151 read: IOPS=469, BW=1879KiB/s (1924kB/s)(18.4MiB/10012msec) 00:44:17.151 slat (nsec): min=8333, max=57429, avg=28231.77, stdev=9375.13 00:44:17.151 clat (usec): min=11710, max=54566, avg=33787.25, stdev=1970.99 00:44:17.151 lat (usec): min=11733, max=54608, avg=33815.48, stdev=1970.82 00:44:17.151 clat percentiles (usec): 00:44:17.151 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:44:17.151 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:44:17.151 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:44:17.151 | 99.00th=[36963], 99.50th=[37487], 99.90th=[54264], 99.95th=[54264], 00:44:17.151 | 99.99th=[54789] 00:44:17.151 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1875.20, stdev=75.15, samples=20 00:44:17.151 iops : min= 416, max= 480, avg=468.80, stdev=18.79, samples=20 00:44:17.151 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:44:17.151 cpu : usr=98.34%, sys=1.24%, ctx=13, majf=0, minf=44 00:44:17.151 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:17.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.151 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.151 issued rwts: total=4704,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:44:17.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.151 filename2: (groupid=0, jobs=1): err= 0: pid=2456062: Wed May 15 09:10:10 2024 00:44:17.151 read: IOPS=468, BW=1875KiB/s (1920kB/s)(18.3MiB/10002msec) 00:44:17.151 slat (usec): min=8, max=120, avg=49.65, stdev=21.41 00:44:17.151 clat (usec): min=24057, max=60182, avg=33665.38, stdev=1837.92 00:44:17.151 lat (usec): min=24068, max=60230, avg=33715.03, stdev=1836.90 00:44:17.151 clat percentiles (usec): 00:44:17.151 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:44:17.151 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:44:17.151 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:44:17.151 | 99.00th=[36963], 99.50th=[36963], 99.90th=[60031], 99.95th=[60031], 00:44:17.151 | 99.99th=[60031] 00:44:17.151 bw ( KiB/s): min= 1536, max= 1920, per=4.15%, avg=1872.84, stdev=97.39, samples=19 00:44:17.151 iops : min= 384, max= 480, avg=468.21, stdev=24.35, samples=19 00:44:17.151 lat (msec) : 50=99.66%, 100=0.34% 00:44:17.151 cpu : usr=98.22%, sys=1.37%, ctx=13, majf=0, minf=33 00:44:17.151 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:17.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.151 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.151 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.151 filename2: (groupid=0, jobs=1): err= 0: pid=2456063: Wed May 15 09:10:10 2024 00:44:17.151 read: IOPS=468, BW=1875KiB/s (1920kB/s)(18.4MiB/10024msec) 00:44:17.151 slat (nsec): min=11218, max=92685, avg=43098.55, stdev=14830.07 00:44:17.151 clat (usec): min=23965, max=46583, avg=33702.91, stdev=1186.31 00:44:17.151 lat (usec): min=24010, max=46603, avg=33746.01, stdev=1187.56 00:44:17.151 clat percentiles (usec): 00:44:17.151 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:44:17.151 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:44:17.151 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:44:17.151 | 99.00th=[36963], 99.50th=[36963], 99.90th=[46400], 99.95th=[46400], 00:44:17.151 | 99.99th=[46400] 00:44:17.151 bw ( KiB/s): min= 1667, max= 1920, per=4.16%, avg=1875.35, stdev=74.71, samples=20 00:44:17.151 iops : min= 416, max= 480, avg=468.80, stdev=18.79, samples=20 00:44:17.151 lat (msec) : 50=100.00% 00:44:17.151 cpu : usr=97.96%, sys=1.52%, ctx=77, majf=0, minf=51 00:44:17.151 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:17.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.151 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.151 issued rwts: total=4698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.151 filename2: (groupid=0, jobs=1): err= 0: pid=2456064: Wed May 15 09:10:10 2024 00:44:17.151 read: IOPS=467, BW=1872KiB/s (1917kB/s)(18.3MiB/10018msec) 00:44:17.151 slat (usec): min=8, max=118, avg=28.26, stdev=29.04 00:44:17.151 clat (usec): min=26550, max=74192, avg=33932.66, stdev=2508.00 00:44:17.151 lat (usec): min=26561, max=74249, avg=33960.93, stdev=2507.98 00:44:17.151 clat percentiles (usec): 00:44:17.151 | 1.00th=[32375], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:44:17.151 | 30.00th=[33424], 
40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:44:17.151 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[35914], 00:44:17.151 | 99.00th=[36963], 99.50th=[37487], 99.90th=[73925], 99.95th=[73925], 00:44:17.151 | 99.99th=[73925] 00:44:17.151 bw ( KiB/s): min= 1539, max= 1920, per=4.14%, avg=1868.95, stdev=95.96, samples=20 00:44:17.151 iops : min= 384, max= 480, avg=467.20, stdev=24.13, samples=20 00:44:17.151 lat (msec) : 50=99.66%, 100=0.34% 00:44:17.151 cpu : usr=96.24%, sys=2.36%, ctx=193, majf=0, minf=44 00:44:17.151 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:17.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.151 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.151 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.151 00:44:17.151 Run status group 0 (all jobs): 00:44:17.151 READ: bw=44.0MiB/s (46.2MB/s), 1872KiB/s-1945KiB/s (1917kB/s-1991kB/s), io=442MiB (463MB), run=10002-10029msec 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 
0 ]] 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:44:17.151 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:17.152 bdev_null0 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:17.152 09:10:10 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:17.152 [2024-05-15 09:10:10.709454] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:17.152 bdev_null1 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:44:17.152 { 00:44:17.152 "params": { 00:44:17.152 "name": "Nvme$subsystem", 00:44:17.152 "trtype": "$TEST_TRANSPORT", 00:44:17.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:17.152 "adrfam": "ipv4", 00:44:17.152 "trsvcid": "$NVMF_PORT", 00:44:17.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:17.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:17.152 "hdgst": ${hdgst:-false}, 00:44:17.152 "ddgst": ${ddgst:-false} 00:44:17.152 }, 00:44:17.152 "method": "bdev_nvme_attach_controller" 00:44:17.152 } 00:44:17.152 EOF 00:44:17.152 )") 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:44:17.152 { 00:44:17.152 "params": { 00:44:17.152 "name": "Nvme$subsystem", 00:44:17.152 "trtype": "$TEST_TRANSPORT", 00:44:17.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:17.152 "adrfam": "ipv4", 00:44:17.152 "trsvcid": "$NVMF_PORT", 00:44:17.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:17.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:17.152 "hdgst": ${hdgst:-false}, 00:44:17.152 "ddgst": ${ddgst:-false} 00:44:17.152 }, 00:44:17.152 "method": "bdev_nvme_attach_controller" 00:44:17.152 } 00:44:17.152 EOF 00:44:17.152 )") 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # 
cat 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:44:17.152 "params": { 00:44:17.152 "name": "Nvme0", 00:44:17.152 "trtype": "tcp", 00:44:17.152 "traddr": "10.0.0.2", 00:44:17.152 "adrfam": "ipv4", 00:44:17.152 "trsvcid": "4420", 00:44:17.152 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:17.152 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:17.152 "hdgst": false, 00:44:17.152 "ddgst": false 00:44:17.152 }, 00:44:17.152 "method": "bdev_nvme_attach_controller" 00:44:17.152 },{ 00:44:17.152 "params": { 00:44:17.152 "name": "Nvme1", 00:44:17.152 "trtype": "tcp", 00:44:17.152 "traddr": "10.0.0.2", 00:44:17.152 "adrfam": "ipv4", 00:44:17.152 "trsvcid": "4420", 00:44:17.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:17.152 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:17.152 "hdgst": false, 00:44:17.152 "ddgst": false 00:44:17.152 }, 00:44:17.152 "method": "bdev_nvme_attach_controller" 00:44:17.152 }' 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:17.152 09:10:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:17.152 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:17.152 ... 00:44:17.152 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:17.152 ... 
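For reference, the xtrace above shows the test assembling an SPDK JSON configuration on /dev/fd/62 (the printf '%s\n' output) and handing it to fio through the spdk_bdev ioengine. Below is a minimal standalone sketch of the same invocation. The bdev_nvme_attach_controller parameters are copied from the printf output in the log; the surrounding "subsystems"/"bdev" wrapper is the usual SPDK JSON config layout and is an assumption here, as are the /tmp paths and the job file name. Only one of the two controllers is shown for brevity.

    #!/usr/bin/env bash
    # Sketch: recreate the fio-over-NVMe/TCP bdev setup seen in the log.
    # The attach-controller params mirror the printf output above; the
    # "subsystems" wrapper is assumed to be the standard SPDK config layout.
    cat > /tmp/spdk_fio.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # fio loads the SPDK bdev engine via LD_PRELOAD, exactly as the
    # LD_PRELOAD= line in the log does (the plugin path is illustrative).
    LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
        fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/spdk_fio.json /tmp/job.fio

The second controller from the log (Nvme1 attached to cnode1) would simply be a second entry in the same "config" array.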
00:44:17.152 fio-3.35 00:44:17.152 Starting 4 threads 00:44:17.152 EAL: No free 2048 kB hugepages reported on node 1 00:44:22.417 00:44:22.417 filename0: (groupid=0, jobs=1): err= 0: pid=2457322: Wed May 15 09:10:16 2024 00:44:22.417 read: IOPS=1831, BW=14.3MiB/s (15.0MB/s)(71.6MiB/5002msec) 00:44:22.417 slat (nsec): min=4100, max=71727, avg=20265.28, stdev=11060.13 00:44:22.417 clat (usec): min=861, max=11037, avg=4293.46, stdev=662.16 00:44:22.417 lat (usec): min=885, max=11049, avg=4313.73, stdev=662.29 00:44:22.417 clat percentiles (usec): 00:44:22.417 | 1.00th=[ 2540], 5.00th=[ 3523], 10.00th=[ 3785], 20.00th=[ 3982], 00:44:22.417 | 30.00th=[ 4080], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4359], 00:44:22.417 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4752], 95.00th=[ 5342], 00:44:22.417 | 99.00th=[ 6849], 99.50th=[ 7439], 99.90th=[ 8979], 99.95th=[ 9372], 00:44:22.417 | 99.99th=[11076] 00:44:22.417 bw ( KiB/s): min=12928, max=15600, per=24.94%, avg=14644.80, stdev=826.51, samples=10 00:44:22.417 iops : min= 1616, max= 1950, avg=1830.60, stdev=103.31, samples=10 00:44:22.417 lat (usec) : 1000=0.10% 00:44:22.417 lat (msec) : 2=0.55%, 4=19.81%, 10=79.53%, 20=0.01% 00:44:22.417 cpu : usr=94.32%, sys=5.04%, ctx=11, majf=0, minf=37 00:44:22.417 IO depths : 1=0.1%, 2=16.5%, 4=56.5%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:22.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.417 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.417 issued rwts: total=9161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.417 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:22.417 filename0: (groupid=0, jobs=1): err= 0: pid=2457324: Wed May 15 09:10:16 2024 00:44:22.417 read: IOPS=1836, BW=14.3MiB/s (15.0MB/s)(71.8MiB/5001msec) 00:44:22.417 slat (nsec): min=3959, max=67711, avg=20441.44, stdev=9763.41 00:44:22.417 clat (usec): min=886, max=9502, avg=4284.10, stdev=577.01 00:44:22.417 lat (usec): min=905, max=9516, avg=4304.54, stdev=576.97 00:44:22.417 clat percentiles (usec): 00:44:22.417 | 1.00th=[ 2802], 5.00th=[ 3523], 10.00th=[ 3785], 20.00th=[ 4015], 00:44:22.417 | 30.00th=[ 4080], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4359], 00:44:22.417 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4686], 95.00th=[ 5276], 00:44:22.417 | 99.00th=[ 6325], 99.50th=[ 6783], 99.90th=[ 8094], 99.95th=[ 8160], 00:44:22.417 | 99.99th=[ 9503] 00:44:22.417 bw ( KiB/s): min=13595, max=15552, per=24.93%, avg=14637.67, stdev=651.73, samples=9 00:44:22.417 iops : min= 1699, max= 1944, avg=1829.67, stdev=81.54, samples=9 00:44:22.417 lat (usec) : 1000=0.02% 00:44:22.417 lat (msec) : 2=0.40%, 4=19.30%, 10=80.27% 00:44:22.417 cpu : usr=94.72%, sys=4.62%, ctx=11, majf=0, minf=49 00:44:22.417 IO depths : 1=0.1%, 2=15.3%, 4=57.9%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:22.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.417 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.417 issued rwts: total=9186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.417 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:22.417 filename1: (groupid=0, jobs=1): err= 0: pid=2457325: Wed May 15 09:10:16 2024 00:44:22.417 read: IOPS=1841, BW=14.4MiB/s (15.1MB/s)(72.0MiB/5002msec) 00:44:22.417 slat (nsec): min=3875, max=71857, avg=15853.97, stdev=9324.26 00:44:22.417 clat (usec): min=1031, max=9497, avg=4292.76, stdev=535.62 00:44:22.417 lat (usec): min=1044, max=9524, avg=4308.61, 
stdev=535.48 00:44:22.417 clat percentiles (usec): 00:44:22.417 | 1.00th=[ 2933], 5.00th=[ 3556], 10.00th=[ 3785], 20.00th=[ 4015], 00:44:22.417 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4293], 60.00th=[ 4359], 00:44:22.417 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 5276], 00:44:22.417 | 99.00th=[ 6194], 99.50th=[ 6587], 99.90th=[ 7832], 99.95th=[ 8848], 00:44:22.417 | 99.99th=[ 9503] 00:44:22.417 bw ( KiB/s): min=13568, max=15568, per=24.97%, avg=14664.89, stdev=683.85, samples=9 00:44:22.417 iops : min= 1696, max= 1946, avg=1833.11, stdev=85.48, samples=9 00:44:22.417 lat (msec) : 2=0.17%, 4=18.54%, 10=81.29% 00:44:22.417 cpu : usr=93.80%, sys=5.64%, ctx=10, majf=0, minf=78 00:44:22.417 IO depths : 1=0.1%, 2=10.0%, 4=62.2%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:22.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.417 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.417 issued rwts: total=9212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.417 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:22.417 filename1: (groupid=0, jobs=1): err= 0: pid=2457326: Wed May 15 09:10:16 2024 00:44:22.417 read: IOPS=1829, BW=14.3MiB/s (15.0MB/s)(71.5MiB/5002msec) 00:44:22.417 slat (nsec): min=3897, max=66939, avg=20612.55, stdev=10787.49 00:44:22.417 clat (usec): min=783, max=9705, avg=4297.85, stdev=593.39 00:44:22.417 lat (usec): min=796, max=9718, avg=4318.47, stdev=593.15 00:44:22.417 clat percentiles (usec): 00:44:22.417 | 1.00th=[ 2638], 5.00th=[ 3589], 10.00th=[ 3851], 20.00th=[ 4015], 00:44:22.417 | 30.00th=[ 4080], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4359], 00:44:22.417 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4752], 95.00th=[ 5342], 00:44:22.417 | 99.00th=[ 6456], 99.50th=[ 6915], 99.90th=[ 8291], 99.95th=[ 8848], 00:44:22.417 | 99.99th=[ 9765] 00:44:22.417 bw ( KiB/s): min=13402, max=15328, per=24.92%, avg=14634.60, stdev=682.54, samples=10 00:44:22.417 iops : min= 1675, max= 1916, avg=1829.30, stdev=85.37, samples=10 00:44:22.417 lat (usec) : 1000=0.07% 00:44:22.417 lat (msec) : 2=0.39%, 4=19.09%, 10=80.45% 00:44:22.417 cpu : usr=93.96%, sys=5.38%, ctx=12, majf=0, minf=44 00:44:22.417 IO depths : 1=0.1%, 2=15.2%, 4=58.1%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:22.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.417 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.417 issued rwts: total=9153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.417 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:22.417 00:44:22.417 Run status group 0 (all jobs): 00:44:22.417 READ: bw=57.3MiB/s (60.1MB/s), 14.3MiB/s-14.4MiB/s (15.0MB/s-15.1MB/s), io=287MiB (301MB), run=5001-5002msec 00:44:22.417 09:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:44:22.417 09:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:22.417 09:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:22.417 09:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:22.417 09:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:22.417 09:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:22.417 09:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:22.417 09:10:17 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.417 09:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:22.417 09:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:22.417 09:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:22.417 09:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.417 09:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:22.417 09:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:22.417 09:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:22.417 09:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:22.417 09:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:22.417 09:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:22.417 09:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.418 09:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:22.418 09:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:22.418 09:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:22.418 09:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.418 09:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:22.418 00:44:22.418 real 0m24.360s 00:44:22.418 user 4m31.755s 00:44:22.418 sys 0m7.398s 00:44:22.418 09:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # xtrace_disable 00:44:22.418 09:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.418 ************************************ 00:44:22.418 END TEST fio_dif_rand_params 00:44:22.418 ************************************ 00:44:22.418 09:10:17 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:44:22.418 09:10:17 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:44:22.418 09:10:17 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:44:22.418 09:10:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:22.418 ************************************ 00:44:22.418 START TEST fio_dif_digest 00:44:22.418 ************************************ 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # fio_dif_digest 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:44:22.418 09:10:17 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:22.418 bdev_null0 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:22.418 [2024-05-15 09:10:17.114371] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:44:22.418 { 00:44:22.418 "params": { 00:44:22.418 "name": "Nvme$subsystem", 00:44:22.418 "trtype": "$TEST_TRANSPORT", 00:44:22.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:22.418 "adrfam": "ipv4", 00:44:22.418 "trsvcid": "$NVMF_PORT", 00:44:22.418 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:44:22.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:22.418 "hdgst": ${hdgst:-false}, 00:44:22.418 "ddgst": ${ddgst:-false} 00:44:22.418 }, 00:44:22.418 "method": "bdev_nvme_attach_controller" 00:44:22.418 } 00:44:22.418 EOF 00:44:22.418 )") 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local sanitizers 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # shift 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local asan_lib= 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # grep libasan 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:44:22.418 "params": { 00:44:22.418 "name": "Nvme0", 00:44:22.418 "trtype": "tcp", 00:44:22.418 "traddr": "10.0.0.2", 00:44:22.418 "adrfam": "ipv4", 00:44:22.418 "trsvcid": "4420", 00:44:22.418 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:22.418 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:22.418 "hdgst": true, 00:44:22.418 "ddgst": true 00:44:22.418 }, 00:44:22.418 "method": "bdev_nvme_attach_controller" 00:44:22.418 }' 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # asan_lib= 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # asan_lib= 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:22.418 09:10:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:22.676 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:44:22.676 ... 
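The job-description line above comes from the fio job file the test writes to /dev/fd/61. Below is a rough reconstruction from the parameters the log reports (rw=randread, 128KiB requests, iodepth=3, "Starting 3 threads", and the runtime=10 set by dif.sh earlier). The filename= bdev name is an assumption, since gen_fio_conf's exact output is not echoed in the log.

    # Sketch of an equivalent fio job file; values are taken from the job
    # line and the dif.sh parameters above, except where marked as assumed.
    cat > /tmp/dif_digest.fio <<'EOF'
    [global]
    ioengine=spdk_bdev   ; matches "ioengine=spdk_bdev" in the job line
    thread=1             ; SPDK fio plugins run jobs as threads
    rw=randread          ; reported as rw=randread
    bs=128k              ; reported 128KiB request size
    iodepth=3            ; reported iodepth=3
    runtime=10           ; dif.sh sets runtime=10 for this test
    time_based=1
    numjobs=3            ; "Starting 3 threads"

    [filename0]
    filename=Nvme0n1     ; assumption: namespace bdev exposed by controller Nvme0
    EOF

Note that the header and data digests being exercised here are enabled transport-side, via the "hdgst": true and "ddgst": true attach-controller parameters shown above, not in the fio job file itself.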
00:44:22.676 fio-3.35 00:44:22.676 Starting 3 threads 00:44:22.676 EAL: No free 2048 kB hugepages reported on node 1 00:44:34.919 00:44:34.919 filename0: (groupid=0, jobs=1): err= 0: pid=2458191: Wed May 15 09:10:27 2024 00:44:34.919 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(267MiB/10047msec) 00:44:34.919 slat (nsec): min=5514, max=76836, avg=16455.03, stdev=3630.44 00:44:34.919 clat (usec): min=10880, max=46481, avg=14056.12, stdev=1236.54 00:44:34.919 lat (usec): min=10894, max=46502, avg=14072.57, stdev=1236.66 00:44:34.919 clat percentiles (usec): 00:44:34.919 | 1.00th=[11731], 5.00th=[12387], 10.00th=[12780], 20.00th=[13173], 00:44:34.919 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:44:34.919 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15270], 95.00th=[15664], 00:44:34.919 | 99.00th=[16581], 99.50th=[16909], 99.90th=[19006], 99.95th=[19006], 00:44:34.919 | 99.99th=[46400] 00:44:34.919 bw ( KiB/s): min=26624, max=28416, per=34.26%, avg=27289.60, stdev=553.44, samples=20 00:44:34.919 iops : min= 208, max= 222, avg=213.20, stdev= 4.32, samples=20 00:44:34.919 lat (msec) : 20=99.95%, 50=0.05% 00:44:34.919 cpu : usr=94.38%, sys=5.16%, ctx=34, majf=0, minf=145 00:44:34.919 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:34.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:34.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:34.919 issued rwts: total=2135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:34.919 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:34.919 filename0: (groupid=0, jobs=1): err= 0: pid=2458192: Wed May 15 09:10:27 2024 00:44:34.919 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(258MiB/10045msec) 00:44:34.919 slat (nsec): min=5192, max=56787, avg=17559.61, stdev=3971.10 00:44:34.919 clat (usec): min=10880, max=53155, avg=14573.45, stdev=1520.65 00:44:34.919 lat (usec): min=10899, max=53167, avg=14591.01, stdev=1520.75 00:44:34.919 clat percentiles (usec): 00:44:34.919 | 1.00th=[12256], 5.00th=[12911], 10.00th=[13304], 20.00th=[13829], 00:44:34.919 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14484], 60.00th=[14746], 00:44:34.919 | 70.00th=[15008], 80.00th=[15401], 90.00th=[15795], 95.00th=[16188], 00:44:34.919 | 99.00th=[17171], 99.50th=[17433], 99.90th=[22938], 99.95th=[49546], 00:44:34.919 | 99.99th=[53216] 00:44:34.919 bw ( KiB/s): min=25856, max=27136, per=33.09%, avg=26357.80, stdev=335.35, samples=20 00:44:34.919 iops : min= 202, max= 212, avg=205.90, stdev= 2.63, samples=20 00:44:34.919 lat (msec) : 20=99.76%, 50=0.19%, 100=0.05% 00:44:34.919 cpu : usr=93.93%, sys=5.63%, ctx=28, majf=0, minf=95 00:44:34.919 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:34.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:34.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:34.919 issued rwts: total=2062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:34.919 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:34.919 filename0: (groupid=0, jobs=1): err= 0: pid=2458193: Wed May 15 09:10:27 2024 00:44:34.919 read: IOPS=204, BW=25.6MiB/s (26.8MB/s)(257MiB/10046msec) 00:44:34.919 slat (nsec): min=5370, max=38579, avg=16625.76, stdev=3411.10 00:44:34.919 clat (usec): min=11294, max=51992, avg=14623.41, stdev=1488.63 00:44:34.919 lat (usec): min=11309, max=52014, avg=14640.03, stdev=1488.64 00:44:34.919 clat percentiles (usec): 00:44:34.919 | 1.00th=[12256], 
5.00th=[12911], 10.00th=[13304], 20.00th=[13829], 00:44:34.919 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:44:34.919 | 70.00th=[15008], 80.00th=[15401], 90.00th=[15926], 95.00th=[16319], 00:44:34.919 | 99.00th=[17171], 99.50th=[17433], 99.90th=[19006], 99.95th=[46400], 00:44:34.919 | 99.99th=[52167] 00:44:34.919 bw ( KiB/s): min=25394, max=27136, per=32.99%, avg=26280.90, stdev=385.84, samples=20 00:44:34.919 iops : min= 198, max= 212, avg=205.30, stdev= 3.06, samples=20 00:44:34.919 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:44:34.919 cpu : usr=94.29%, sys=5.19%, ctx=54, majf=0, minf=102 00:44:34.919 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:34.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:34.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:34.919 issued rwts: total=2055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:34.919 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:34.919 00:44:34.919 Run status group 0 (all jobs): 00:44:34.919 READ: bw=77.8MiB/s (81.6MB/s), 25.6MiB/s-26.6MiB/s (26.8MB/s-27.9MB/s), io=782MiB (819MB), run=10045-10047msec 00:44:34.919 09:10:28 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:44:34.919 09:10:28 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:44:34.919 09:10:28 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:44:34.919 09:10:28 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:34.919 09:10:28 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:44:34.919 09:10:28 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:34.919 09:10:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:34.920 09:10:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:34.920 09:10:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:34.920 09:10:28 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:34.920 09:10:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:34.920 09:10:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:34.920 09:10:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:34.920 00:44:34.920 real 0m11.170s 00:44:34.920 user 0m29.426s 00:44:34.920 sys 0m1.919s 00:44:34.920 09:10:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:44:34.920 09:10:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:34.920 ************************************ 00:44:34.920 END TEST fio_dif_digest 00:44:34.920 ************************************ 00:44:34.920 09:10:28 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:44:34.920 09:10:28 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:44:34.920 09:10:28 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:44:34.920 09:10:28 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:44:34.920 09:10:28 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:44:34.920 09:10:28 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:44:34.920 09:10:28 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:44:34.920 09:10:28 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:44:34.920 rmmod nvme_tcp 00:44:34.920 rmmod nvme_fabrics 00:44:34.920 rmmod 
nvme_keyring 00:44:34.920 09:10:28 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:44:34.920 09:10:28 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:44:34.920 09:10:28 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:44:34.920 09:10:28 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2452147 ']' 00:44:34.920 09:10:28 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2452147 00:44:34.920 09:10:28 nvmf_dif -- common/autotest_common.sh@947 -- # '[' -z 2452147 ']' 00:44:34.920 09:10:28 nvmf_dif -- common/autotest_common.sh@951 -- # kill -0 2452147 00:44:34.920 09:10:28 nvmf_dif -- common/autotest_common.sh@952 -- # uname 00:44:34.920 09:10:28 nvmf_dif -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:44:34.920 09:10:28 nvmf_dif -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2452147 00:44:34.920 09:10:28 nvmf_dif -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:44:34.920 09:10:28 nvmf_dif -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:44:34.920 09:10:28 nvmf_dif -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2452147' 00:44:34.920 killing process with pid 2452147 00:44:34.920 09:10:28 nvmf_dif -- common/autotest_common.sh@966 -- # kill 2452147 00:44:34.920 [2024-05-15 09:10:28.368540] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:44:34.920 09:10:28 nvmf_dif -- common/autotest_common.sh@971 -- # wait 2452147 00:44:34.920 09:10:28 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:44:34.920 09:10:28 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:35.178 Waiting for block devices as requested 00:44:35.178 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:35.178 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:35.178 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:35.178 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:35.436 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:35.436 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:35.436 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:35.436 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:35.694 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:44:35.694 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:35.694 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:35.694 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:35.951 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:35.951 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:35.951 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:35.951 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:36.209 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:36.209 09:10:30 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:44:36.209 09:10:30 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:44:36.209 09:10:30 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:44:36.209 09:10:30 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:44:36.209 09:10:30 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:36.209 09:10:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:36.209 09:10:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:38.133 09:10:32 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:44:38.133 00:44:38.133 real 
1m7.426s 00:44:38.133 user 6m28.737s 00:44:38.133 sys 0m19.426s 00:44:38.133 09:10:32 nvmf_dif -- common/autotest_common.sh@1123 -- # xtrace_disable 00:44:38.133 09:10:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:38.133 ************************************ 00:44:38.133 END TEST nvmf_dif 00:44:38.133 ************************************ 00:44:38.133 09:10:32 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:38.133 09:10:32 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:44:38.133 09:10:32 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:44:38.133 09:10:32 -- common/autotest_common.sh@10 -- # set +x 00:44:38.391 ************************************ 00:44:38.391 START TEST nvmf_abort_qd_sizes 00:44:38.391 ************************************ 00:44:38.391 09:10:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:38.391 * Looking for test storage... 00:44:38.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:38.391 09:10:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:38.391 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:44:38.391 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:38.391 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:38.391 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:38.391 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:38.391 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:38.391 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:38.391 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:38.391 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:38.391 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:38.391 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:38.391 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:44:38.391 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:44:38.391 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:38.391 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:38.391 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
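The sourcing block above also derives the initiator's host identity from nvme-cli before nvmftestinit tears down any stale namespace. A minimal sketch of that derivation, assuming it mirrors the nvme gen-hostnqn call xtraced above (the parameter expansion is a paraphrase, not a verbatim excerpt from nvmf/common.sh):

  # Generate a host NQN once, then reuse the embedded UUID as the host ID.
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:29f67375-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare UUID (paraphrased extraction, see note above)
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")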
00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:44:38.392 09:10:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- 
nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:44:40.921 Found 0000:09:00.0 (0x8086 - 0x159b) 00:44:40.921 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:44:40.922 Found 0000:09:00.1 (0x8086 - 0x159b) 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:44:40.922 Found net devices under 0000:09:00.0: cvl_0_0 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:44:40.922 Found net devices under 0000:09:00.1: cvl_0_1 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@404 
-- # (( 2 == 0 )) 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:44:40.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:40.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:44:40.922 00:44:40.922 --- 10.0.0.2 ping statistics --- 00:44:40.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:40.922 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:40.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:40.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:44:40.922 00:44:40.922 --- 10.0.0.1 ping statistics --- 00:44:40.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:40.922 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:44:40.922 09:10:35 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:42.302 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:42.302 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:42.302 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:42.302 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:42.302 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:42.302 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:42.302 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:42.302 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:42.302 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:42.302 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:42.302 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:42.302 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:42.302 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:42.302 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:42.302 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:42.302 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:43.241 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:44:43.241 09:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:43.241 09:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:44:43.241 09:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:44:43.241 09:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:43.241 09:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:44:43.241 09:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:44:43.241 09:10:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:44:43.241 09:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:44:43.241 09:10:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@721 -- # xtrace_disable 00:44:43.241 09:10:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:43.241 09:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2463474 00:44:43.241 09:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:44:43.241 09:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2463474 00:44:43.241 09:10:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@828 -- # '[' -z 2463474 ']' 00:44:43.241 09:10:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:43.241 09:10:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local max_retries=100 00:44:43.241 09:10:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
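Condensed from the nvmf_tcp_init xtrace above, the phy TCP fixture amounts to the following sequence; this is a sketch using the cvl_0_* interface names observed on this node (names vary with the NIC), not a verbatim copy of nvmf/common.sh:

  # Move the target-side port into its own namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Sanity-check reachability in both directions before starting the target,
  # then run nvmf_tgt inside the namespace so 10.0.0.2:4420 is its listen address.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf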
00:44:43.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:43.241 09:10:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # xtrace_disable 00:44:43.241 09:10:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:43.241 [2024-05-15 09:10:37.916422] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:44:43.241 [2024-05-15 09:10:37.916497] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:43.241 EAL: No free 2048 kB hugepages reported on node 1 00:44:43.241 [2024-05-15 09:10:37.990284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:43.500 [2024-05-15 09:10:38.078787] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:43.500 [2024-05-15 09:10:38.078841] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:43.500 [2024-05-15 09:10:38.078854] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:43.500 [2024-05-15 09:10:38.078865] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:43.500 [2024-05-15 09:10:38.078875] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:43.500 [2024-05-15 09:10:38.078958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:44:43.500 [2024-05-15 09:10:38.079022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:44:43.500 [2024-05-15 09:10:38.079086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:44:43.500 [2024-05-15 09:10:38.079089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@861 -- # return 0 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@727 -- # xtrace_disable 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:0b:00.0 ]] 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:0b:00.0 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1104 -- # xtrace_disable 00:44:43.500 09:10:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:43.500 ************************************ 00:44:43.500 START TEST spdk_target_abort 00:44:43.500 ************************************ 00:44:43.500 09:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # spdk_target 00:44:43.500 09:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:44:43.500 09:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:44:43.500 09:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:43.500 09:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:46.789 spdk_targetn1 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:46.789 [2024-05-15 09:10:41.118528] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:46.789 [2024-05-15 09:10:41.150532] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:44:46.789 [2024-05-15 09:10:41.150850] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:46.789 09:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:46.789 EAL: No free 2048 kB hugepages reported on node 1 00:44:50.074 Initializing NVMe Controllers 00:44:50.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:50.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:50.074 Initialization complete. Launching workers. 00:44:50.074 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11686, failed: 0 00:44:50.074 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1229, failed to submit 10457 00:44:50.074 success 780, unsuccess 449, failed 0 00:44:50.074 09:10:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:50.074 09:10:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:50.074 EAL: No free 2048 kB hugepages reported on node 1 00:44:53.362 Initializing NVMe Controllers 00:44:53.362 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:53.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:53.362 Initialization complete. Launching workers. 00:44:53.362 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8711, failed: 0 00:44:53.362 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1217, failed to submit 7494 00:44:53.362 success 363, unsuccess 854, failed 0 00:44:53.362 09:10:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:53.362 09:10:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:53.362 EAL: No free 2048 kB hugepages reported on node 1 00:44:56.691 Initializing NVMe Controllers 00:44:56.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:56.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:56.691 Initialization complete. Launching workers. 
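rpc_cmd in the trace is the harness wrapper around SPDK's scripts/rpc.py, so the spdk_target_abort setup reduces to the sketch below (direct rpc.py invocation is an assumption for clarity; the harness routes these through its namespace-aware wrapper). In the abort example's summary lines, "success"/"unsuccess" tally aborts the target did or, roughly speaking, did not manage to apply before the I/O completed; "failed" would indicate real errors.

  # Sketch of the xtraced RPC sequence, paths relative to the SPDK tree.
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  # One abort run per queue depth, mirroring qds=(4 24 64) above.
  for qd in 4 24 64; do
      build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done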
00:44:56.691 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30689, failed: 0 00:44:56.691 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2744, failed to submit 27945 00:44:56.691 success 505, unsuccess 2239, failed 0 00:44:56.691 09:10:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:44:56.691 09:10:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:56.691 09:10:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:56.691 09:10:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:56.691 09:10:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:44:56.691 09:10:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:56.691 09:10:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:57.258 09:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:57.258 09:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2463474 00:44:57.258 09:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@947 -- # '[' -z 2463474 ']' 00:44:57.258 09:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # kill -0 2463474 00:44:57.258 09:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # uname 00:44:57.258 09:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:44:57.258 09:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2463474 00:44:57.258 09:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:44:57.258 09:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:44:57.258 09:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2463474' 00:44:57.258 killing process with pid 2463474 00:44:57.258 09:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # kill 2463474 00:44:57.258 [2024-05-15 09:10:52.039087] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:44:57.258 09:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # wait 2463474 00:44:57.516 00:44:57.516 real 0m13.977s 00:44:57.516 user 0m52.867s 00:44:57.516 sys 0m2.630s 00:44:57.516 09:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:44:57.516 09:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:57.516 ************************************ 00:44:57.516 END TEST spdk_target_abort 00:44:57.516 ************************************ 00:44:57.516 09:10:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:44:57.516 09:10:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:44:57.516 09:10:52 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1104 -- # xtrace_disable 00:44:57.516 09:10:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:57.516 ************************************ 00:44:57.516 START TEST kernel_target_abort 00:44:57.516 ************************************ 00:44:57.516 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # kernel_target 00:44:57.516 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:44:57.516 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:44:57.516 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:44:57.516 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:44:57.516 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:44:57.516 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:44:57.516 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:44:57.516 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:44:57.516 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:44:57.516 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:44:57.516 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:44:57.776 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:44:57.776 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:44:57.776 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:44:57.776 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:57.776 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:57.776 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:44:57.776 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:44:57.776 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:44:57.776 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:44:57.776 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:44:57.777 09:10:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:59.155 Waiting for block devices as requested 00:44:59.155 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:59.155 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:59.155 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:59.155 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:59.155 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:59.155 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:59.413 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:59.413 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:59.413 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:44:59.413 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:59.672 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:59.672 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:59.672 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:59.672 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:59.672 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:59.931 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:59.931 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:59.931 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:44:59.931 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:44:59.931 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:44:59.931 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:44:59.931 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:44:59.931 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:44:59.931 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:44:59.931 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:44:59.931 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:44:59.931 No valid GPT data, bailing 00:44:59.931 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:44:59.931 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:44:59.931 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:44:59.931 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:44:59.931 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:44:59.931 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:59.931 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:00.190 09:10:54 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:45:00.190 00:45:00.190 Discovery Log Number of Records 2, Generation counter 2 00:45:00.190 =====Discovery Log Entry 0====== 00:45:00.190 trtype: tcp 00:45:00.190 adrfam: ipv4 00:45:00.190 subtype: current discovery subsystem 00:45:00.190 treq: not specified, sq flow control disable supported 00:45:00.190 portid: 1 00:45:00.190 trsvcid: 4420 00:45:00.190 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:45:00.190 traddr: 10.0.0.1 00:45:00.190 eflags: none 00:45:00.190 sectype: none 00:45:00.190 =====Discovery Log Entry 1====== 00:45:00.190 trtype: tcp 00:45:00.190 adrfam: ipv4 00:45:00.190 subtype: nvme subsystem 00:45:00.190 treq: not specified, sq flow control disable supported 00:45:00.190 portid: 1 00:45:00.190 trsvcid: 4420 00:45:00.190 subnqn: nqn.2016-06.io.spdk:testnqn 00:45:00.190 traddr: 10.0.0.1 00:45:00.190 eflags: none 00:45:00.190 sectype: none 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:00.190 09:10:54 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:00.190 09:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:00.190 EAL: No free 2048 kB hugepages reported on node 1 00:45:03.479 Initializing NVMe Controllers 00:45:03.480 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:03.480 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:03.480 Initialization complete. Launching workers. 00:45:03.480 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38438, failed: 0 00:45:03.480 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38438, failed to submit 0 00:45:03.480 success 0, unsuccess 38438, failed 0 00:45:03.480 09:10:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:03.480 09:10:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:03.480 EAL: No free 2048 kB hugepages reported on node 1 00:45:06.768 Initializing NVMe Controllers 00:45:06.768 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:06.768 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:06.768 Initialization complete. Launching workers. 
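kernel_target_abort exercises the in-kernel nvmet target instead of SPDK's. The xtrace above shows each echo but not its redirection target, so the attribute paths in this sketch are an annotation of the standard nvmet configfs layout (an assumption), not text lifted from the log:

  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$sub" && mkdir "$sub/namespaces/1" && mkdir "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"
  echo 1 > "$sub/attr_allow_any_host"
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"       # backing block device
  echo 1 > "$sub/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"                          # expose the subsystem on the port
  # Verified above via: nvme discover --hostnqn=... --hostid=... -a 10.0.0.1 -t tcp -s 4420

clean_kernel_target later reverses this: disable the namespace, unlink the port, rmdir the configfs entries, and modprobe -r nvmet_tcp nvmet, as the trace below shows.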
00:45:06.768 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 74327, failed: 0 00:45:06.768 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18730, failed to submit 55597 00:45:06.768 success 0, unsuccess 18730, failed 0 00:45:06.768 09:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:06.768 09:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:06.768 EAL: No free 2048 kB hugepages reported on node 1 00:45:09.299 Initializing NVMe Controllers 00:45:09.299 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:09.299 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:09.299 Initialization complete. Launching workers. 00:45:09.299 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72387, failed: 0 00:45:09.299 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18070, failed to submit 54317 00:45:09.299 success 0, unsuccess 18070, failed 0 00:45:09.299 09:11:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:45:09.299 09:11:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:45:09.299 09:11:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:45:09.299 09:11:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:09.299 09:11:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:09.299 09:11:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:45:09.299 09:11:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:09.299 09:11:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:45:09.299 09:11:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:45:09.299 09:11:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:10.675 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:10.675 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:10.675 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:10.675 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:10.675 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:10.675 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:45:10.675 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:45:10.675 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:10.675 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:10.675 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:10.675 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:10.675 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:10.675 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:10.675 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:45:10.675 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:45:10.675 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:11.612 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:45:11.870 00:45:11.870 real 0m14.191s 00:45:11.870 user 0m5.994s 00:45:11.870 sys 0m3.382s 00:45:11.870 09:11:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:45:11.870 09:11:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:11.870 ************************************ 00:45:11.870 END TEST kernel_target_abort 00:45:11.870 ************************************ 00:45:11.870 09:11:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:45:11.870 09:11:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:45:11.870 09:11:06 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:45:11.870 09:11:06 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:45:11.870 09:11:06 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:45:11.870 09:11:06 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:45:11.870 09:11:06 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:45:11.870 09:11:06 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:45:11.870 rmmod nvme_tcp 00:45:11.870 rmmod nvme_fabrics 00:45:11.870 rmmod nvme_keyring 00:45:11.870 09:11:06 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:45:11.870 09:11:06 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:45:11.870 09:11:06 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:45:11.870 09:11:06 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2463474 ']' 00:45:11.870 09:11:06 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2463474 00:45:11.870 09:11:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@947 -- # '[' -z 2463474 ']' 00:45:11.870 09:11:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@951 -- # kill -0 2463474 00:45:11.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2463474) - No such process 00:45:11.870 09:11:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@974 -- # echo 'Process with pid 2463474 is not found' 00:45:11.870 Process with pid 2463474 is not found 00:45:11.870 09:11:06 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:45:11.870 09:11:06 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:13.241 Waiting for block devices as requested 00:45:13.241 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:13.241 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:13.241 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:13.500 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:13.500 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:13.500 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:13.500 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:13.760 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:13.760 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:45:13.760 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:13.760 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:14.019 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:14.019 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:14.019 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:14.278 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:14.278 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:45:14.278 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:14.278 09:11:09 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:45:14.278 09:11:09 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:45:14.278 09:11:09 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:14.278 09:11:09 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:45:14.278 09:11:09 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:14.278 09:11:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:14.278 09:11:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:16.845 09:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:45:16.845 00:45:16.845 real 0m38.131s 00:45:16.845 user 1m1.164s 00:45:16.845 sys 0m9.787s 00:45:16.845 09:11:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # xtrace_disable 00:45:16.845 09:11:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:16.845 ************************************ 00:45:16.845 END TEST nvmf_abort_qd_sizes 00:45:16.845 ************************************ 00:45:16.845 09:11:11 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:16.845 09:11:11 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:45:16.845 09:11:11 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:45:16.845 09:11:11 -- common/autotest_common.sh@10 -- # set +x 00:45:16.845 ************************************ 00:45:16.845 START TEST keyring_file 00:45:16.845 ************************************ 00:45:16.845 09:11:11 keyring_file -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:16.845 * Looking for test storage... 
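The clean_kernel_target step traced a few lines earlier tears the configfs-based kernel NVMe-oF target down in reverse creation order. A reconstruction from the logged rm/rmdir/modprobe calls — the destination of the bare "echo 0" is not visible in the log, so the namespace enable flag below is an assumption:

    subnqn=nqn.2016-06.io.spdk:testnqn
    cfg=/sys/kernel/config/nvmet
    if [[ -e $cfg/subsystems/$subnqn ]]; then
        echo 0 > "$cfg/subsystems/$subnqn/namespaces/1/enable"  # assumed target of the logged 'echo 0'
        rm -f "$cfg/ports/1/subsystems/$subnqn"     # unlink the subsystem from port 1
        rmdir "$cfg/subsystems/$subnqn/namespaces/1"
        rmdir "$cfg/ports/1"
        rmdir "$cfg/subsystems/$subnqn"
        modprobe -r nvmet_tcp nvmet                 # unload the kernel target modules
    fi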
00:45:16.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:16.845 09:11:11 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:16.845 09:11:11 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:16.845 09:11:11 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:16.845 09:11:11 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:16.845 09:11:11 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:16.845 09:11:11 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:16.845 09:11:11 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:16.845 09:11:11 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:16.845 09:11:11 keyring_file -- paths/export.sh@5 -- # export PATH 00:45:16.845 09:11:11 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@47 -- # : 0 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:45:16.845 09:11:11 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:45:16.845 09:11:11 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:16.845 09:11:11 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:16.845 09:11:11 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:16.845 09:11:11 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:45:16.845 09:11:11 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:45:16.845 09:11:11 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:45:16.845 09:11:11 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:16.845 09:11:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:16.845 09:11:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:16.845 09:11:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:16.845 09:11:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:16.846 09:11:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:16.846 09:11:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FYcFGd1ls1 00:45:16.846 09:11:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:16.846 09:11:11 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:16.846 09:11:11 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:45:16.846 09:11:11 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:45:16.846 09:11:11 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:45:16.846 09:11:11 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:45:16.846 09:11:11 keyring_file -- nvmf/common.sh@705 -- # python - 00:45:16.846 09:11:11 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FYcFGd1ls1 00:45:16.846 09:11:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FYcFGd1ls1 00:45:16.846 09:11:11 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.FYcFGd1ls1 00:45:16.846 09:11:11 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:45:16.846 09:11:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:16.846 09:11:11 keyring_file -- keyring/common.sh@17 -- # name=key1 00:45:16.846 09:11:11 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:16.846 09:11:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:16.846 09:11:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:16.846 09:11:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AYAxXLyVC6 00:45:16.846 09:11:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:16.846 09:11:11 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:16.846 09:11:11 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:45:16.846 09:11:11 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:45:16.846 09:11:11 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:45:16.846 09:11:11 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:45:16.846 09:11:11 keyring_file -- nvmf/common.sh@705 -- # python - 00:45:16.846 09:11:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AYAxXLyVC6 00:45:16.846 09:11:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AYAxXLyVC6 00:45:16.846 09:11:11 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.AYAxXLyVC6 00:45:16.846 09:11:11 keyring_file -- keyring/file.sh@30 -- # tgtpid=2469615 00:45:16.846 09:11:11 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:16.846 09:11:11 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2469615 00:45:16.846 09:11:11 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 2469615 ']' 00:45:16.846 09:11:11 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:16.846 09:11:11 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:45:16.846 09:11:11 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:16.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:16.846 09:11:11 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:45:16.846 09:11:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:16.846 [2024-05-15 09:11:11.349471] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
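The prep_key trace just above mints the two TLS PSK files the keyring test will register as key0 and key1. A minimal sketch of that step, run from an SPDK checkout — format_interchange_psk is the helper from test/nvmf/common.sh named in the trace, and its inline python step computes the base64 body of the interchange key:

    source test/nvmf/common.sh             # provides format_interchange_psk
    key=00112233445566778899aabbccddeeff   # hex key material from file.sh
    digest=0                               # 0 selects the no-HMAC interchange variant
    path=$(mktemp)                         # e.g. /tmp/tmp.FYcFGd1ls1 in this run
    format_interchange_psk "$key" "$digest" > "$path"  # emits an "NVMeTLSkey-1:..." line
    chmod 0600 "$path"                     # the keyring rejects wider modes (see the 0660 test below)

The same recipe runs again later in this log to mint /tmp/tmp.ZsUt1hQLOK after the first key file has been deleted.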
00:45:16.846 [2024-05-15 09:11:11.349569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469615 ] 00:45:16.846 EAL: No free 2048 kB hugepages reported on node 1 00:45:16.846 [2024-05-15 09:11:11.416162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:16.846 [2024-05-15 09:11:11.503822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:45:17.103 09:11:11 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:17.103 [2024-05-15 09:11:11.754369] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:17.103 null0 00:45:17.103 [2024-05-15 09:11:11.786357] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:45:17.103 [2024-05-15 09:11:11.786434] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:17.103 [2024-05-15 09:11:11.786965] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:17.103 [2024-05-15 09:11:11.794419] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:17.103 09:11:11 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:17.103 [2024-05-15 09:11:11.802425] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:45:17.103 request: 00:45:17.103 { 00:45:17.103 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:45:17.103 "secure_channel": false, 00:45:17.103 "listen_address": { 00:45:17.103 "trtype": "tcp", 00:45:17.103 "traddr": "127.0.0.1", 00:45:17.103 "trsvcid": "4420" 00:45:17.103 }, 00:45:17.103 "method": "nvmf_subsystem_add_listener", 00:45:17.103 "req_id": 1 00:45:17.103 } 00:45:17.103 Got JSON-RPC error response 00:45:17.103 response: 00:45:17.103 { 00:45:17.103 "code": -32602, 00:45:17.103 
"message": "Invalid parameters" 00:45:17.103 } 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:45:17.103 09:11:11 keyring_file -- keyring/file.sh@46 -- # bperfpid=2469629 00:45:17.103 09:11:11 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:45:17.103 09:11:11 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2469629 /var/tmp/bperf.sock 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 2469629 ']' 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:17.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:45:17.103 09:11:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:17.103 [2024-05-15 09:11:11.847964] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:45:17.103 [2024-05-15 09:11:11.848038] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469629 ] 00:45:17.103 EAL: No free 2048 kB hugepages reported on node 1 00:45:17.360 [2024-05-15 09:11:11.911125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:17.360 [2024-05-15 09:11:11.997598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:45:17.360 09:11:12 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:45:17.360 09:11:12 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:45:17.360 09:11:12 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FYcFGd1ls1 00:45:17.360 09:11:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FYcFGd1ls1 00:45:17.619 09:11:12 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.AYAxXLyVC6 00:45:17.619 09:11:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.AYAxXLyVC6 00:45:17.877 09:11:12 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:45:17.877 09:11:12 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:45:17.877 09:11:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:17.877 09:11:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:17.877 09:11:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:45:18.135 09:11:12 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.FYcFGd1ls1 == \/\t\m\p\/\t\m\p\.\F\Y\c\F\G\d\1\l\s\1 ]] 00:45:18.135 09:11:12 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:45:18.135 09:11:12 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:45:18.135 09:11:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:18.135 09:11:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:18.135 09:11:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:18.393 09:11:13 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.AYAxXLyVC6 == \/\t\m\p\/\t\m\p\.\A\Y\A\x\X\L\y\V\C\6 ]] 00:45:18.393 09:11:13 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:45:18.393 09:11:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:18.393 09:11:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:18.393 09:11:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:18.393 09:11:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:18.393 09:11:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:18.651 09:11:13 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:45:18.651 09:11:13 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:45:18.651 09:11:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:18.651 09:11:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:18.651 09:11:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:18.651 09:11:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:18.651 09:11:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:18.908 09:11:13 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:45:18.908 09:11:13 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:18.908 09:11:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:19.165 [2024-05-15 09:11:13.835408] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:19.165 nvme0n1 00:45:19.165 09:11:13 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:45:19.165 09:11:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:19.165 09:11:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:19.165 09:11:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:19.165 09:11:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:19.165 09:11:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:19.423 09:11:14 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:45:19.423 09:11:14 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:45:19.423 09:11:14 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:19.423 09:11:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:19.423 09:11:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:19.423 09:11:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:19.423 09:11:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:19.681 09:11:14 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:45:19.681 09:11:14 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:19.939 Running I/O for 1 seconds... 00:45:20.877 00:45:20.877 Latency(us) 00:45:20.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:20.877 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:45:20.877 nvme0n1 : 1.01 7002.63 27.35 0.00 0.00 18191.01 5242.88 27767.85 00:45:20.877 =================================================================================================================== 00:45:20.877 Total : 7002.63 27.35 0.00 0.00 18191.01 5242.88 27767.85 00:45:20.877 0 00:45:20.877 09:11:15 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:20.877 09:11:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:21.135 09:11:15 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:45:21.135 09:11:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:21.135 09:11:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:21.135 09:11:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:21.135 09:11:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:21.135 09:11:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:21.393 09:11:16 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:45:21.393 09:11:16 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:45:21.393 09:11:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:21.393 09:11:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:21.393 09:11:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:21.393 09:11:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:21.393 09:11:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:21.650 09:11:16 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:45:21.650 09:11:16 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:21.650 09:11:16 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:45:21.650 09:11:16 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:21.650 09:11:16 keyring_file -- common/autotest_common.sh@637 -- # 
local arg=bperf_cmd 00:45:21.650 09:11:16 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:21.650 09:11:16 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:45:21.650 09:11:16 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:21.650 09:11:16 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:21.650 09:11:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:21.908 [2024-05-15 09:11:16.507775] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:21.908 [2024-05-15 09:11:16.508365] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x132a1d0 (107): Transport endpoint is not connected 00:45:21.908 [2024-05-15 09:11:16.509359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x132a1d0 (9): Bad file descriptor 00:45:21.908 [2024-05-15 09:11:16.510358] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:45:21.908 [2024-05-15 09:11:16.510379] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:21.908 [2024-05-15 09:11:16.510394] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
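The attach attempt above deliberately passes --psk key1 to a listener that was set up with key0, so the TLS connection collapses (bad file descriptor, controller left in failed state) and the RPC error is dumped next. The negative-test pattern, sketched with simplified stand-ins for the NOT and bperf_cmd helpers named in the trace:

    # NOT passes only when the wrapped command fails (simplified from
    # common/autotest_common.sh, which also tracks the exit status in es).
    NOT() { ! "$@"; }
    # bperf_cmd forwards an RPC to the bdevperf instance on bperf.sock
    # (simplified from test/keyring/common.sh).
    bperf_cmd() { ./scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

    NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk key1   # target holds key0, so this must fail

The same NOT wrapper is reused below for the two other failure cases: a key file left with 0660 permissions, and a key file that has been rm'd before the attach.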
00:45:21.908 request: 00:45:21.908 { 00:45:21.908 "name": "nvme0", 00:45:21.908 "trtype": "tcp", 00:45:21.908 "traddr": "127.0.0.1", 00:45:21.908 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:21.908 "adrfam": "ipv4", 00:45:21.908 "trsvcid": "4420", 00:45:21.908 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:21.908 "psk": "key1", 00:45:21.908 "method": "bdev_nvme_attach_controller", 00:45:21.908 "req_id": 1 00:45:21.908 } 00:45:21.908 Got JSON-RPC error response 00:45:21.908 response: 00:45:21.908 { 00:45:21.908 "code": -32602, 00:45:21.908 "message": "Invalid parameters" 00:45:21.908 } 00:45:21.908 09:11:16 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:45:21.908 09:11:16 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:45:21.908 09:11:16 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:45:21.908 09:11:16 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:45:21.908 09:11:16 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:45:21.908 09:11:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:21.908 09:11:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:21.908 09:11:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:21.908 09:11:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:21.908 09:11:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:22.167 09:11:16 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:45:22.167 09:11:16 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:45:22.167 09:11:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:22.167 09:11:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:22.167 09:11:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:22.167 09:11:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:22.167 09:11:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:22.425 09:11:17 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:45:22.425 09:11:17 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:45:22.425 09:11:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:22.683 09:11:17 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:45:22.683 09:11:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:45:22.941 09:11:17 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:45:22.941 09:11:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:22.941 09:11:17 keyring_file -- keyring/file.sh@77 -- # jq length 00:45:23.199 09:11:17 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:45:23.200 09:11:17 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.FYcFGd1ls1 00:45:23.200 09:11:17 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.FYcFGd1ls1 00:45:23.200 09:11:17 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:45:23.200 09:11:17 
keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.FYcFGd1ls1 00:45:23.200 09:11:17 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:45:23.200 09:11:17 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:23.200 09:11:17 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:45:23.200 09:11:17 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:23.200 09:11:17 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FYcFGd1ls1 00:45:23.200 09:11:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FYcFGd1ls1 00:45:23.200 [2024-05-15 09:11:17.985344] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.FYcFGd1ls1': 0100660 00:45:23.200 [2024-05-15 09:11:17.985378] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:45:23.200 request: 00:45:23.200 { 00:45:23.200 "name": "key0", 00:45:23.200 "path": "/tmp/tmp.FYcFGd1ls1", 00:45:23.200 "method": "keyring_file_add_key", 00:45:23.200 "req_id": 1 00:45:23.200 } 00:45:23.200 Got JSON-RPC error response 00:45:23.200 response: 00:45:23.200 { 00:45:23.200 "code": -1, 00:45:23.200 "message": "Operation not permitted" 00:45:23.200 } 00:45:23.460 09:11:18 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:45:23.460 09:11:18 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:45:23.460 09:11:18 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:45:23.460 09:11:18 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:45:23.460 09:11:18 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.FYcFGd1ls1 00:45:23.460 09:11:18 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FYcFGd1ls1 00:45:23.460 09:11:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FYcFGd1ls1 00:45:23.720 09:11:18 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.FYcFGd1ls1 00:45:23.720 09:11:18 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:45:23.720 09:11:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:23.720 09:11:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:23.720 09:11:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:23.720 09:11:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:23.720 09:11:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:23.720 09:11:18 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:45:23.720 09:11:18 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:23.979 09:11:18 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:45:23.979 09:11:18 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:23.979 09:11:18 
keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:45:23.979 09:11:18 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:23.979 09:11:18 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:45:23.979 09:11:18 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:23.979 09:11:18 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:23.979 09:11:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:23.979 [2024-05-15 09:11:18.735388] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.FYcFGd1ls1': No such file or directory 00:45:23.979 [2024-05-15 09:11:18.735425] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:45:23.979 [2024-05-15 09:11:18.735468] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:45:23.979 [2024-05-15 09:11:18.735480] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:45:23.979 [2024-05-15 09:11:18.735492] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:45:23.979 request: 00:45:23.979 { 00:45:23.979 "name": "nvme0", 00:45:23.979 "trtype": "tcp", 00:45:23.979 "traddr": "127.0.0.1", 00:45:23.979 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:23.979 "adrfam": "ipv4", 00:45:23.979 "trsvcid": "4420", 00:45:23.979 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:23.979 "psk": "key0", 00:45:23.979 "method": "bdev_nvme_attach_controller", 00:45:23.979 "req_id": 1 00:45:23.979 } 00:45:23.979 Got JSON-RPC error response 00:45:23.979 response: 00:45:23.979 { 00:45:23.979 "code": -19, 00:45:23.979 "message": "No such device" 00:45:23.979 } 00:45:23.979 09:11:18 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:45:23.979 09:11:18 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:45:23.979 09:11:18 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:45:23.979 09:11:18 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:45:23.979 09:11:18 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:45:23.979 09:11:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:24.238 09:11:18 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:24.238 09:11:18 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:24.238 09:11:18 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:24.238 09:11:18 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:24.238 09:11:18 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:24.238 09:11:18 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:24.238 09:11:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ZsUt1hQLOK 00:45:24.238 09:11:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:24.238 09:11:18 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:24.238 09:11:18 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:45:24.238 09:11:18 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:45:24.238 09:11:18 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:45:24.238 09:11:18 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:45:24.238 09:11:18 keyring_file -- nvmf/common.sh@705 -- # python - 00:45:24.497 09:11:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ZsUt1hQLOK 00:45:24.497 09:11:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ZsUt1hQLOK 00:45:24.497 09:11:19 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.ZsUt1hQLOK 00:45:24.497 09:11:19 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZsUt1hQLOK 00:45:24.497 09:11:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZsUt1hQLOK 00:45:24.757 09:11:19 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:24.757 09:11:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:25.016 nvme0n1 00:45:25.016 09:11:19 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:45:25.016 09:11:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:25.016 09:11:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:25.016 09:11:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:25.016 09:11:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:25.016 09:11:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:25.276 09:11:19 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:45:25.276 09:11:19 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:45:25.276 09:11:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:25.535 09:11:20 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:45:25.535 09:11:20 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:45:25.535 09:11:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:25.535 09:11:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:25.535 09:11:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:25.793 09:11:20 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:45:25.793 09:11:20 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:45:25.793 09:11:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:25.793 09:11:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:25.793 09:11:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:25.793 09:11:20 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:25.793 09:11:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:26.051 09:11:20 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:45:26.051 09:11:20 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:26.051 09:11:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:26.310 09:11:20 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:45:26.310 09:11:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:26.310 09:11:20 keyring_file -- keyring/file.sh@104 -- # jq length 00:45:26.310 09:11:21 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:45:26.310 09:11:21 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZsUt1hQLOK 00:45:26.310 09:11:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZsUt1hQLOK 00:45:26.568 09:11:21 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.AYAxXLyVC6 00:45:26.568 09:11:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.AYAxXLyVC6 00:45:26.826 09:11:21 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:26.826 09:11:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:27.396 nvme0n1 00:45:27.396 09:11:21 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:45:27.396 09:11:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:45:27.655 09:11:22 keyring_file -- keyring/file.sh@112 -- # config='{ 00:45:27.655 "subsystems": [ 00:45:27.655 { 00:45:27.655 "subsystem": "keyring", 00:45:27.655 "config": [ 00:45:27.655 { 00:45:27.655 "method": "keyring_file_add_key", 00:45:27.655 "params": { 00:45:27.655 "name": "key0", 00:45:27.655 "path": "/tmp/tmp.ZsUt1hQLOK" 00:45:27.655 } 00:45:27.655 }, 00:45:27.655 { 00:45:27.655 "method": "keyring_file_add_key", 00:45:27.655 "params": { 00:45:27.655 "name": "key1", 00:45:27.655 "path": "/tmp/tmp.AYAxXLyVC6" 00:45:27.655 } 00:45:27.655 } 00:45:27.655 ] 00:45:27.655 }, 00:45:27.655 { 00:45:27.655 "subsystem": "iobuf", 00:45:27.655 "config": [ 00:45:27.655 { 00:45:27.655 "method": "iobuf_set_options", 00:45:27.655 "params": { 00:45:27.655 "small_pool_count": 8192, 00:45:27.655 "large_pool_count": 1024, 00:45:27.655 "small_bufsize": 8192, 00:45:27.655 "large_bufsize": 135168 00:45:27.655 } 00:45:27.655 } 00:45:27.655 ] 00:45:27.655 }, 00:45:27.655 { 00:45:27.655 "subsystem": "sock", 00:45:27.655 "config": [ 00:45:27.655 { 00:45:27.655 "method": "sock_impl_set_options", 00:45:27.655 "params": { 00:45:27.655 
"impl_name": "posix", 00:45:27.655 "recv_buf_size": 2097152, 00:45:27.655 "send_buf_size": 2097152, 00:45:27.655 "enable_recv_pipe": true, 00:45:27.655 "enable_quickack": false, 00:45:27.655 "enable_placement_id": 0, 00:45:27.655 "enable_zerocopy_send_server": true, 00:45:27.655 "enable_zerocopy_send_client": false, 00:45:27.655 "zerocopy_threshold": 0, 00:45:27.655 "tls_version": 0, 00:45:27.655 "enable_ktls": false 00:45:27.655 } 00:45:27.655 }, 00:45:27.655 { 00:45:27.655 "method": "sock_impl_set_options", 00:45:27.655 "params": { 00:45:27.655 "impl_name": "ssl", 00:45:27.655 "recv_buf_size": 4096, 00:45:27.655 "send_buf_size": 4096, 00:45:27.655 "enable_recv_pipe": true, 00:45:27.655 "enable_quickack": false, 00:45:27.655 "enable_placement_id": 0, 00:45:27.655 "enable_zerocopy_send_server": true, 00:45:27.655 "enable_zerocopy_send_client": false, 00:45:27.655 "zerocopy_threshold": 0, 00:45:27.655 "tls_version": 0, 00:45:27.655 "enable_ktls": false 00:45:27.655 } 00:45:27.655 } 00:45:27.655 ] 00:45:27.655 }, 00:45:27.655 { 00:45:27.655 "subsystem": "vmd", 00:45:27.655 "config": [] 00:45:27.655 }, 00:45:27.655 { 00:45:27.655 "subsystem": "accel", 00:45:27.655 "config": [ 00:45:27.655 { 00:45:27.655 "method": "accel_set_options", 00:45:27.655 "params": { 00:45:27.655 "small_cache_size": 128, 00:45:27.655 "large_cache_size": 16, 00:45:27.655 "task_count": 2048, 00:45:27.655 "sequence_count": 2048, 00:45:27.655 "buf_count": 2048 00:45:27.655 } 00:45:27.655 } 00:45:27.655 ] 00:45:27.655 }, 00:45:27.655 { 00:45:27.655 "subsystem": "bdev", 00:45:27.655 "config": [ 00:45:27.655 { 00:45:27.655 "method": "bdev_set_options", 00:45:27.655 "params": { 00:45:27.655 "bdev_io_pool_size": 65535, 00:45:27.655 "bdev_io_cache_size": 256, 00:45:27.655 "bdev_auto_examine": true, 00:45:27.655 "iobuf_small_cache_size": 128, 00:45:27.655 "iobuf_large_cache_size": 16 00:45:27.655 } 00:45:27.655 }, 00:45:27.655 { 00:45:27.655 "method": "bdev_raid_set_options", 00:45:27.655 "params": { 00:45:27.655 "process_window_size_kb": 1024 00:45:27.655 } 00:45:27.655 }, 00:45:27.655 { 00:45:27.655 "method": "bdev_iscsi_set_options", 00:45:27.655 "params": { 00:45:27.655 "timeout_sec": 30 00:45:27.655 } 00:45:27.655 }, 00:45:27.655 { 00:45:27.655 "method": "bdev_nvme_set_options", 00:45:27.655 "params": { 00:45:27.655 "action_on_timeout": "none", 00:45:27.655 "timeout_us": 0, 00:45:27.655 "timeout_admin_us": 0, 00:45:27.655 "keep_alive_timeout_ms": 10000, 00:45:27.655 "arbitration_burst": 0, 00:45:27.655 "low_priority_weight": 0, 00:45:27.655 "medium_priority_weight": 0, 00:45:27.655 "high_priority_weight": 0, 00:45:27.655 "nvme_adminq_poll_period_us": 10000, 00:45:27.655 "nvme_ioq_poll_period_us": 0, 00:45:27.655 "io_queue_requests": 512, 00:45:27.655 "delay_cmd_submit": true, 00:45:27.655 "transport_retry_count": 4, 00:45:27.655 "bdev_retry_count": 3, 00:45:27.655 "transport_ack_timeout": 0, 00:45:27.655 "ctrlr_loss_timeout_sec": 0, 00:45:27.655 "reconnect_delay_sec": 0, 00:45:27.655 "fast_io_fail_timeout_sec": 0, 00:45:27.655 "disable_auto_failback": false, 00:45:27.655 "generate_uuids": false, 00:45:27.655 "transport_tos": 0, 00:45:27.655 "nvme_error_stat": false, 00:45:27.655 "rdma_srq_size": 0, 00:45:27.655 "io_path_stat": false, 00:45:27.655 "allow_accel_sequence": false, 00:45:27.655 "rdma_max_cq_size": 0, 00:45:27.655 "rdma_cm_event_timeout_ms": 0, 00:45:27.655 "dhchap_digests": [ 00:45:27.655 "sha256", 00:45:27.655 "sha384", 00:45:27.655 "sha512" 00:45:27.655 ], 00:45:27.655 "dhchap_dhgroups": [ 00:45:27.655 "null", 
00:45:27.655 "ffdhe2048", 00:45:27.655 "ffdhe3072", 00:45:27.655 "ffdhe4096", 00:45:27.655 "ffdhe6144", 00:45:27.655 "ffdhe8192" 00:45:27.655 ] 00:45:27.655 } 00:45:27.655 }, 00:45:27.655 { 00:45:27.655 "method": "bdev_nvme_attach_controller", 00:45:27.655 "params": { 00:45:27.656 "name": "nvme0", 00:45:27.656 "trtype": "TCP", 00:45:27.656 "adrfam": "IPv4", 00:45:27.656 "traddr": "127.0.0.1", 00:45:27.656 "trsvcid": "4420", 00:45:27.656 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:27.656 "prchk_reftag": false, 00:45:27.656 "prchk_guard": false, 00:45:27.656 "ctrlr_loss_timeout_sec": 0, 00:45:27.656 "reconnect_delay_sec": 0, 00:45:27.656 "fast_io_fail_timeout_sec": 0, 00:45:27.656 "psk": "key0", 00:45:27.656 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:27.656 "hdgst": false, 00:45:27.656 "ddgst": false 00:45:27.656 } 00:45:27.656 }, 00:45:27.656 { 00:45:27.656 "method": "bdev_nvme_set_hotplug", 00:45:27.656 "params": { 00:45:27.656 "period_us": 100000, 00:45:27.656 "enable": false 00:45:27.656 } 00:45:27.656 }, 00:45:27.656 { 00:45:27.656 "method": "bdev_wait_for_examine" 00:45:27.656 } 00:45:27.656 ] 00:45:27.656 }, 00:45:27.656 { 00:45:27.656 "subsystem": "nbd", 00:45:27.656 "config": [] 00:45:27.656 } 00:45:27.656 ] 00:45:27.656 }' 00:45:27.656 09:11:22 keyring_file -- keyring/file.sh@114 -- # killprocess 2469629 00:45:27.656 09:11:22 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 2469629 ']' 00:45:27.656 09:11:22 keyring_file -- common/autotest_common.sh@951 -- # kill -0 2469629 00:45:27.656 09:11:22 keyring_file -- common/autotest_common.sh@952 -- # uname 00:45:27.656 09:11:22 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:45:27.656 09:11:22 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2469629 00:45:27.656 09:11:22 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:45:27.656 09:11:22 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:45:27.656 09:11:22 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2469629' 00:45:27.656 killing process with pid 2469629 00:45:27.656 09:11:22 keyring_file -- common/autotest_common.sh@966 -- # kill 2469629 00:45:27.656 Received shutdown signal, test time was about 1.000000 seconds 00:45:27.656 00:45:27.656 Latency(us) 00:45:27.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:27.656 =================================================================================================================== 00:45:27.656 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:27.656 09:11:22 keyring_file -- common/autotest_common.sh@971 -- # wait 2469629 00:45:27.916 09:11:22 keyring_file -- keyring/file.sh@117 -- # bperfpid=2470970 00:45:27.916 09:11:22 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2470970 /var/tmp/bperf.sock 00:45:27.916 09:11:22 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 2470970 ']' 00:45:27.916 09:11:22 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:27.916 09:11:22 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:45:27.916 09:11:22 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:45:27.916 09:11:22 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:45:27.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:27.916 09:11:22 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:45:27.916 "subsystems": [ 00:45:27.916 { 00:45:27.916 "subsystem": "keyring", 00:45:27.916 "config": [ 00:45:27.916 { 00:45:27.916 "method": "keyring_file_add_key", 00:45:27.916 "params": { 00:45:27.916 "name": "key0", 00:45:27.916 "path": "/tmp/tmp.ZsUt1hQLOK" 00:45:27.916 } 00:45:27.916 }, 00:45:27.916 { 00:45:27.916 "method": "keyring_file_add_key", 00:45:27.916 "params": { 00:45:27.916 "name": "key1", 00:45:27.916 "path": "/tmp/tmp.AYAxXLyVC6" 00:45:27.916 } 00:45:27.917 } 00:45:27.917 ] 00:45:27.917 }, 00:45:27.917 { 00:45:27.917 "subsystem": "iobuf", 00:45:27.917 "config": [ 00:45:27.917 { 00:45:27.917 "method": "iobuf_set_options", 00:45:27.917 "params": { 00:45:27.917 "small_pool_count": 8192, 00:45:27.917 "large_pool_count": 1024, 00:45:27.917 "small_bufsize": 8192, 00:45:27.917 "large_bufsize": 135168 00:45:27.917 } 00:45:27.917 } 00:45:27.917 ] 00:45:27.917 }, 00:45:27.917 { 00:45:27.917 "subsystem": "sock", 00:45:27.917 "config": [ 00:45:27.917 { 00:45:27.917 "method": "sock_impl_set_options", 00:45:27.917 "params": { 00:45:27.917 "impl_name": "posix", 00:45:27.917 "recv_buf_size": 2097152, 00:45:27.917 "send_buf_size": 2097152, 00:45:27.917 "enable_recv_pipe": true, 00:45:27.917 "enable_quickack": false, 00:45:27.917 "enable_placement_id": 0, 00:45:27.917 "enable_zerocopy_send_server": true, 00:45:27.917 "enable_zerocopy_send_client": false, 00:45:27.917 "zerocopy_threshold": 0, 00:45:27.917 "tls_version": 0, 00:45:27.917 "enable_ktls": false 00:45:27.917 } 00:45:27.917 }, 00:45:27.917 { 00:45:27.917 "method": "sock_impl_set_options", 00:45:27.917 "params": { 00:45:27.917 "impl_name": "ssl", 00:45:27.917 "recv_buf_size": 4096, 00:45:27.917 "send_buf_size": 4096, 00:45:27.917 "enable_recv_pipe": true, 00:45:27.917 "enable_quickack": false, 00:45:27.917 "enable_placement_id": 0, 00:45:27.917 "enable_zerocopy_send_server": true, 00:45:27.917 "enable_zerocopy_send_client": false, 00:45:27.917 "zerocopy_threshold": 0, 00:45:27.917 "tls_version": 0, 00:45:27.917 "enable_ktls": false 00:45:27.917 } 00:45:27.917 } 00:45:27.917 ] 00:45:27.917 }, 00:45:27.917 { 00:45:27.917 "subsystem": "vmd", 00:45:27.917 "config": [] 00:45:27.917 }, 00:45:27.917 { 00:45:27.917 "subsystem": "accel", 00:45:27.917 "config": [ 00:45:27.917 { 00:45:27.917 "method": "accel_set_options", 00:45:27.917 "params": { 00:45:27.917 "small_cache_size": 128, 00:45:27.917 "large_cache_size": 16, 00:45:27.917 "task_count": 2048, 00:45:27.917 "sequence_count": 2048, 00:45:27.917 "buf_count": 2048 00:45:27.917 } 00:45:27.917 } 00:45:27.917 ] 00:45:27.917 }, 00:45:27.917 { 00:45:27.917 "subsystem": "bdev", 00:45:27.917 "config": [ 00:45:27.917 { 00:45:27.917 "method": "bdev_set_options", 00:45:27.917 "params": { 00:45:27.917 "bdev_io_pool_size": 65535, 00:45:27.917 "bdev_io_cache_size": 256, 00:45:27.917 "bdev_auto_examine": true, 00:45:27.917 "iobuf_small_cache_size": 128, 00:45:27.917 "iobuf_large_cache_size": 16 00:45:27.917 } 00:45:27.917 }, 00:45:27.917 { 00:45:27.917 "method": "bdev_raid_set_options", 00:45:27.917 "params": { 00:45:27.917 "process_window_size_kb": 1024 00:45:27.917 } 00:45:27.917 }, 00:45:27.917 { 00:45:27.917 "method": "bdev_iscsi_set_options", 00:45:27.917 "params": { 00:45:27.917 "timeout_sec": 30 00:45:27.917 } 00:45:27.917 }, 00:45:27.917 { 00:45:27.917 "method": "bdev_nvme_set_options", 
00:45:27.917 "params": { 00:45:27.917 "action_on_timeout": "none", 00:45:27.917 "timeout_us": 0, 00:45:27.917 "timeout_admin_us": 0, 00:45:27.917 "keep_alive_timeout_ms": 10000, 00:45:27.917 "arbitration_burst": 0, 00:45:27.917 "low_priority_weight": 0, 00:45:27.917 "medium_priority_weight": 0, 00:45:27.917 "high_priority_weight": 0, 00:45:27.917 "nvme_adminq_poll_period_us": 10000, 00:45:27.917 "nvme_ioq_poll_period_us": 0, 00:45:27.917 "io_queue_requests": 512, 00:45:27.917 "delay_cmd_submit": true, 00:45:27.917 "transport_retry_count": 4, 00:45:27.917 "bdev_retry_count": 3, 00:45:27.917 "transport_ack_timeout": 0, 00:45:27.917 "ctrlr_loss_timeout_sec": 0, 00:45:27.917 "reconnect_delay_sec": 0, 00:45:27.917 "fast_io_fail_timeout_sec": 0, 00:45:27.917 "disable_auto_failback": false, 00:45:27.917 "generate_uuids": false, 00:45:27.917 "transport_tos": 0, 00:45:27.917 "nvme_error_stat": false, 00:45:27.917 "rdma_srq_size": 0, 00:45:27.917 "io_path_stat": false, 00:45:27.917 "allow_accel_sequence": false, 00:45:27.917 "rdma_max_cq_size": 0, 00:45:27.917 "rdma_cm_event_timeout_ms": 0, 00:45:27.917 "dhchap_digests": [ 00:45:27.917 "sha256", 00:45:27.917 "sha384", 00:45:27.917 "sha512" 00:45:27.917 ], 00:45:27.917 "dhchap_dhgroups": [ 00:45:27.917 "null", 00:45:27.917 "ffdhe2048", 00:45:27.917 "ffdhe3072", 00:45:27.917 "ffdhe4096", 00:45:27.917 "ffdhe6144", 00:45:27.917 "ffdhe8192" 00:45:27.917 ] 00:45:27.917 } 00:45:27.917 }, 00:45:27.917 { 00:45:27.917 "method": "bdev_nvme_attach_controller", 00:45:27.917 "params": { 00:45:27.917 "name": "nvme0", 00:45:27.917 "trtype": "TCP", 00:45:27.917 "adrfam": "IPv4", 00:45:27.917 "traddr": "127.0.0.1", 00:45:27.917 "trsvcid": "4420", 00:45:27.917 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:27.917 "prchk_reftag": false, 00:45:27.917 "prchk_guard": false, 00:45:27.917 "ctrlr_loss_timeout_sec": 0, 00:45:27.917 "reconnect_delay_sec": 0, 00:45:27.917 "fast_io_fail_timeout_sec": 0, 00:45:27.917 "psk": "key0", 00:45:27.917 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:27.917 "hdgst": false, 00:45:27.917 "ddgst": false 00:45:27.917 } 00:45:27.917 }, 00:45:27.917 { 00:45:27.917 "method": "bdev_nvme_set_hotplug", 00:45:27.917 "params": { 00:45:27.917 "period_us": 100000, 00:45:27.917 "enable": false 00:45:27.917 } 00:45:27.917 }, 00:45:27.917 { 00:45:27.917 "method": "bdev_wait_for_examine" 00:45:27.917 } 00:45:27.917 ] 00:45:27.917 }, 00:45:27.917 { 00:45:27.917 "subsystem": "nbd", 00:45:27.917 "config": [] 00:45:27.917 } 00:45:27.917 ] 00:45:27.917 }' 00:45:27.917 09:11:22 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:45:27.917 09:11:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:27.917 [2024-05-15 09:11:22.489812] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:45:27.917 [2024-05-15 09:11:22.489903] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2470970 ] 00:45:27.917 EAL: No free 2048 kB hugepages reported on node 1 00:45:27.917 [2024-05-15 09:11:22.567028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:27.917 [2024-05-15 09:11:22.656595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:45:28.182 [2024-05-15 09:11:22.833682] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:28.752 09:11:23 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:45:28.752 09:11:23 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:45:28.752 09:11:23 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:45:28.752 09:11:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:28.752 09:11:23 keyring_file -- keyring/file.sh@120 -- # jq length 00:45:29.010 09:11:23 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:45:29.010 09:11:23 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:45:29.010 09:11:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:29.010 09:11:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:29.010 09:11:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:29.010 09:11:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:29.010 09:11:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:29.268 09:11:23 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:45:29.268 09:11:23 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:45:29.268 09:11:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:29.268 09:11:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:29.268 09:11:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:29.268 09:11:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:29.268 09:11:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:29.526 09:11:24 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:45:29.526 09:11:24 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:45:29.526 09:11:24 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:45:29.526 09:11:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:45:29.786 09:11:24 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:45:29.786 09:11:24 keyring_file -- keyring/file.sh@1 -- # cleanup 00:45:29.786 09:11:24 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.ZsUt1hQLOK /tmp/tmp.AYAxXLyVC6 00:45:29.786 09:11:24 keyring_file -- keyring/file.sh@20 -- # killprocess 2470970 00:45:29.786 09:11:24 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 2470970 ']' 00:45:29.786 09:11:24 keyring_file -- common/autotest_common.sh@951 -- # kill -0 2470970 00:45:29.786 09:11:24 keyring_file -- common/autotest_common.sh@952 -- # 
uname 00:45:29.786 09:11:24 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:45:29.786 09:11:24 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2470970 00:45:29.786 09:11:24 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:45:29.786 09:11:24 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:45:29.786 09:11:24 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2470970' 00:45:29.786 killing process with pid 2470970 00:45:29.786 09:11:24 keyring_file -- common/autotest_common.sh@966 -- # kill 2470970 00:45:29.786 Received shutdown signal, test time was about 1.000000 seconds 00:45:29.786 00:45:29.786 Latency(us) 00:45:29.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:29.786 =================================================================================================================== 00:45:29.786 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:45:29.786 09:11:24 keyring_file -- common/autotest_common.sh@971 -- # wait 2470970 00:45:30.045 09:11:24 keyring_file -- keyring/file.sh@21 -- # killprocess 2469615 00:45:30.045 09:11:24 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 2469615 ']' 00:45:30.045 09:11:24 keyring_file -- common/autotest_common.sh@951 -- # kill -0 2469615 00:45:30.045 09:11:24 keyring_file -- common/autotest_common.sh@952 -- # uname 00:45:30.045 09:11:24 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:45:30.045 09:11:24 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2469615 00:45:30.045 09:11:24 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:45:30.045 09:11:24 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:45:30.045 09:11:24 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2469615' 00:45:30.045 killing process with pid 2469615 00:45:30.045 09:11:24 keyring_file -- common/autotest_common.sh@966 -- # kill 2469615 00:45:30.045 [2024-05-15 09:11:24.717668] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:45:30.045 [2024-05-15 09:11:24.717726] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:45:30.045 09:11:24 keyring_file -- common/autotest_common.sh@971 -- # wait 2469615 00:45:30.304 00:45:30.304 real 0m13.956s 00:45:30.304 user 0m34.738s 00:45:30.304 sys 0m3.294s 00:45:30.304 09:11:25 keyring_file -- common/autotest_common.sh@1123 -- # xtrace_disable 00:45:30.304 09:11:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:30.304 ************************************ 00:45:30.304 END TEST keyring_file 00:45:30.304 ************************************ 00:45:30.595 09:11:25 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:45:30.595 09:11:25 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:45:30.595 09:11:25 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:45:30.595 09:11:25 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:45:30.595 09:11:25 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:45:30.595 09:11:25 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:45:30.595 09:11:25 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:45:30.595 09:11:25 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:45:30.595 
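The checks above exercise bperf over its private RPC socket rather than the default /var/tmp/spdk.sock: keyring_get_keys is piped through jq to count the loaded keys and read each key's refcnt, and bdev_nvme_get_controllers confirms that nvme0, created from the JSON config, is still attached. A sketch of those queries, assuming bperf is still listening on /var/tmp/bperf.sock and jq is available:

# Sketch of the RPC verification performed above (paths as in this workspace).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock

# Two keys (key0, key1) should be loaded.
"$RPC" -s "$SOCK" keyring_get_keys | jq length

# Per-key reference count, as keyring/common.sh computes it.
"$RPC" -s "$SOCK" keyring_get_keys | jq '.[] | select(.name == "key0")' | jq -r .refcnt

# The controller built from the config should still be present.
"$RPC" -s "$SOCK" bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0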
09:11:25 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:45:30.595 09:11:25 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:45:30.595 09:11:25 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:45:30.595 09:11:25 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:45:30.595 09:11:25 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:45:30.595 09:11:25 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:45:30.595 09:11:25 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:45:30.595 09:11:25 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:45:30.595 09:11:25 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:45:30.595 09:11:25 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:45:30.595 09:11:25 -- common/autotest_common.sh@721 -- # xtrace_disable 00:45:30.595 09:11:25 -- common/autotest_common.sh@10 -- # set +x 00:45:30.595 09:11:25 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:45:30.595 09:11:25 -- common/autotest_common.sh@1389 -- # local autotest_es=0 00:45:30.595 09:11:25 -- common/autotest_common.sh@1390 -- # xtrace_disable 00:45:30.595 09:11:25 -- common/autotest_common.sh@10 -- # set +x 00:45:32.497 INFO: APP EXITING 00:45:32.497 INFO: killing all VMs 00:45:32.497 INFO: killing vhost app 00:45:32.497 INFO: EXIT DONE 00:45:33.431 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:45:33.431 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:45:33.431 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:45:33.431 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:45:33.431 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:45:33.431 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:45:33.431 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:45:33.431 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:45:33.431 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:45:33.691 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:45:33.691 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:45:33.691 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:45:33.691 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:45:33.691 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:45:33.691 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:45:33.691 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:45:33.691 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:45:35.215 Cleaning 00:45:35.215 Removing: /var/run/dpdk/spdk0/config 00:45:35.215 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:45:35.215 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:45:35.215 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:45:35.215 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:45:35.215 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:45:35.215 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:45:35.215 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:45:35.215 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:45:35.215 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:45:35.215 Removing: /var/run/dpdk/spdk0/hugepage_info 00:45:35.215 Removing: /var/run/dpdk/spdk1/config 00:45:35.215 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:45:35.215 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:45:35.215 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:45:35.215 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:45:35.215 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:45:35.215 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:45:35.215 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:45:35.215 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:45:35.215 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:45:35.215 Removing: /var/run/dpdk/spdk1/hugepage_info 00:45:35.215 Removing: /var/run/dpdk/spdk1/mp_socket 00:45:35.215 Removing: /var/run/dpdk/spdk2/config 00:45:35.215 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:45:35.215 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:45:35.215 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:45:35.215 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:45:35.215 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:45:35.215 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:45:35.215 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:45:35.215 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:45:35.215 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:45:35.215 Removing: /var/run/dpdk/spdk2/hugepage_info 00:45:35.215 Removing: /var/run/dpdk/spdk3/config 00:45:35.215 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:45:35.215 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:45:35.215 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:45:35.215 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:45:35.215 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:45:35.215 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:45:35.215 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:45:35.215 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:45:35.215 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:45:35.215 Removing: /var/run/dpdk/spdk3/hugepage_info 00:45:35.215 Removing: /var/run/dpdk/spdk4/config 00:45:35.215 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:45:35.215 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:45:35.215 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:45:35.215 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:45:35.215 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:45:35.215 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:45:35.215 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:45:35.215 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:45:35.215 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:45:35.215 Removing: /var/run/dpdk/spdk4/hugepage_info 00:45:35.215 Removing: /dev/shm/bdev_svc_trace.1 00:45:35.215 Removing: /dev/shm/nvmf_trace.0 00:45:35.215 Removing: /dev/shm/spdk_tgt_trace.pid2136630 00:45:35.215 Removing: /var/run/dpdk/spdk0 00:45:35.215 Removing: /var/run/dpdk/spdk1 00:45:35.215 Removing: /var/run/dpdk/spdk2 00:45:35.215 Removing: /var/run/dpdk/spdk3 00:45:35.215 Removing: /var/run/dpdk/spdk4 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2135083 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2135810 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2136630 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2137069 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2137756 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2137893 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2138609 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2138619 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2138864 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2140172 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2141093 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2141392 
00:45:35.215 Removing: /var/run/dpdk/spdk_pid2141590 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2141800 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2141988 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2142143 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2142297 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2142481 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2143062 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2145413 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2145575 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2145738 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2145863 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2146186 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2146303 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2146624 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2146737 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2146910 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2147036 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2147205 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2147210 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2147587 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2147853 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2148051 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2148219 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2148249 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2148434 00:45:35.215 Removing: /var/run/dpdk/spdk_pid2148592 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2148745 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2148924 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2149179 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2149331 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2149490 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2149762 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2149924 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2150078 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2150318 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2150512 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2150670 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2150823 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2151094 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2151257 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2151410 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2151689 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2151848 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2152014 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2152201 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2152359 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2152563 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2155035 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2210999 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2213906 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2221646 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2225227 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2227867 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2228268 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2236080 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2236089 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2236621 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2237279 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2237937 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2238339 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2238343 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2238571 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2238614 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2238622 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2239278 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2239928 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2240489 
00:45:35.216 Removing: /var/run/dpdk/spdk_pid2240876 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2240990 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2241135 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2242017 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2242740 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2248376 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2248648 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2251555 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2256163 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2258220 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2265189 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2271080 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2272276 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2272932 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2283998 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2286490 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2310451 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2313637 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2314831 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2316632 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2316766 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2316830 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2316927 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2317360 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2318649 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2319280 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2319702 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2321242 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2321622 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2322180 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2324869 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2328535 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2331950 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2356727 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2359359 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2363544 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2364366 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2365461 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2368293 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2370940 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2375841 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2375845 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2378909 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2379153 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2379323 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2379680 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2379685 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2381264 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2382443 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2383722 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2384917 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2386095 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2387271 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2391230 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2391569 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2392579 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2393169 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2397023 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2398879 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2402576 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2406321 00:45:35.216 Removing: /var/run/dpdk/spdk_pid2413447 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2418286 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2418288 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2431173 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2431573 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2431985 
00:45:35.473 Removing: /var/run/dpdk/spdk_pid2432385 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2432968 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2433374 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2433784 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2434196 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2436987 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2437232 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2441308 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2441366 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2443085 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2448919 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2448924 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2452199 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2453599 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2454995 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2455862 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2457259 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2458017 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2463881 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2464269 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2464666 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2466193 00:45:35.473 Removing: /var/run/dpdk/spdk_pid2466590 00:45:35.474 Removing: /var/run/dpdk/spdk_pid2466984 00:45:35.474 Removing: /var/run/dpdk/spdk_pid2469615 00:45:35.474 Removing: /var/run/dpdk/spdk_pid2469629 00:45:35.474 Removing: /var/run/dpdk/spdk_pid2470970 00:45:35.474 Clean 00:45:35.474 09:11:30 -- common/autotest_common.sh@1448 -- # return 0 00:45:35.474 09:11:30 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:45:35.474 09:11:30 -- common/autotest_common.sh@727 -- # xtrace_disable 00:45:35.474 09:11:30 -- common/autotest_common.sh@10 -- # set +x 00:45:35.474 09:11:30 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:45:35.474 09:11:30 -- common/autotest_common.sh@727 -- # xtrace_disable 00:45:35.474 09:11:30 -- common/autotest_common.sh@10 -- # set +x 00:45:35.474 09:11:30 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:35.474 09:11:30 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:45:35.474 09:11:30 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:45:35.474 09:11:30 -- spdk/autotest.sh@387 -- # hash lcov 00:45:35.474 09:11:30 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:45:35.474 09:11:30 -- spdk/autotest.sh@389 -- # hostname 00:45:35.474 09:11:30 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:45:35.732 geninfo: WARNING: invalid characters removed from testname! 
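The capture above writes this host's counters to cov_test.info; the lcov invocations that follow fold that into the pre-test baseline and then strip trees that are not SPDK's own sources (dpdk, system headers, the vmd example, spdk_lspci, spdk_top) before the report is produced. A condensed sketch of that merge-and-filter sequence, keeping the branch/function rc flags visible in the log (the genhtml rc options only affect report generation and are omitted here):

# Sketch of the coverage merge and filter steps in this run.
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
OUT=../output   # stands in for the workspace's spdk/../output directory

# Fold the pre-test baseline and the post-test capture together.
lcov $LCOV_OPTS -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info

# Remove everything outside the SPDK tree proper.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r $OUT/cov_total.info "$pat" -o $OUT/cov_total.info
done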
00:46:02.263 09:11:56 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:06.447 09:12:00 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:09.728 09:12:03 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:12.254 09:12:06 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:14.818 09:12:09 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:18.098 09:12:12 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:20.626 09:12:15 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:46:20.626 09:12:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:20.626 09:12:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:46:20.626 09:12:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:20.626 09:12:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:20.626 09:12:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:20.626 09:12:15 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:20.626 09:12:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:20.626 09:12:15 -- paths/export.sh@5 -- $ export PATH 00:46:20.626 09:12:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:20.626 09:12:15 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:46:20.626 09:12:15 -- common/autobuild_common.sh@437 -- $ date +%s 00:46:20.626 09:12:15 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715757135.XXXXXX 00:46:20.626 09:12:15 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715757135.lalOE3 00:46:20.626 09:12:15 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:46:20.626 09:12:15 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:46:20.626 09:12:15 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:46:20.626 09:12:15 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:46:20.626 09:12:15 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:46:20.626 09:12:15 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:46:20.626 09:12:15 -- common/autobuild_common.sh@453 -- $ get_config_params 00:46:20.626 09:12:15 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:46:20.626 09:12:15 -- common/autotest_common.sh@10 -- $ set +x 00:46:20.626 09:12:15 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:46:20.626 09:12:15 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:46:20.626 09:12:15 -- pm/common@17 -- $ local monitor 00:46:20.626 09:12:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:20.626 09:12:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:20.626 09:12:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:20.626 
09:12:15 -- pm/common@21 -- $ date +%s 00:46:20.626 09:12:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:20.626 09:12:15 -- pm/common@21 -- $ date +%s 00:46:20.626 09:12:15 -- pm/common@25 -- $ sleep 1 00:46:20.626 09:12:15 -- pm/common@21 -- $ date +%s 00:46:20.626 09:12:15 -- pm/common@21 -- $ date +%s 00:46:20.626 09:12:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715757135 00:46:20.626 09:12:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715757135 00:46:20.626 09:12:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715757135 00:46:20.626 09:12:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715757135 00:46:20.626 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715757135_collect-vmstat.pm.log 00:46:20.626 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715757135_collect-cpu-load.pm.log 00:46:20.626 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715757135_collect-cpu-temp.pm.log 00:46:20.626 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715757135_collect-bmc-pm.bmc.pm.log 00:46:21.560 09:12:16 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:46:21.560 09:12:16 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:46:21.560 09:12:16 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:21.560 09:12:16 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:46:21.560 09:12:16 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:46:21.560 09:12:16 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:46:21.560 09:12:16 -- spdk/autopackage.sh@19 -- $ timing_finish 00:46:21.560 09:12:16 -- common/autotest_common.sh@733 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:46:21.560 09:12:16 -- common/autotest_common.sh@734 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:46:21.560 09:12:16 -- common/autotest_common.sh@736 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:46:21.560 09:12:16 -- spdk/autopackage.sh@20 -- $ exit 0 00:46:21.560 09:12:16 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:46:21.560 09:12:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:46:21.560 09:12:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:46:21.560 09:12:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:21.560 09:12:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:46:21.560 09:12:16 -- pm/common@44 -- $ pid=2482819 00:46:21.560 09:12:16 -- pm/common@50 -- $ kill -TERM 2482819 00:46:21.560 09:12:16 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:46:21.560 09:12:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:46:21.560 09:12:16 -- pm/common@44 -- $ pid=2482821 00:46:21.560 09:12:16 -- pm/common@50 -- $ kill -TERM 2482821 00:46:21.560 09:12:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:21.560 09:12:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:46:21.560 09:12:16 -- pm/common@44 -- $ pid=2482823 00:46:21.560 09:12:16 -- pm/common@50 -- $ kill -TERM 2482823 00:46:21.560 09:12:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:21.560 09:12:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:46:21.560 09:12:16 -- pm/common@44 -- $ pid=2482855 00:46:21.560 09:12:16 -- pm/common@50 -- $ sudo -E kill -TERM 2482855 00:46:21.818 + [[ -n 2028483 ]] 00:46:21.818 + sudo kill 2028483 00:46:21.829 [Pipeline] } 00:46:21.846 [Pipeline] // stage 00:46:21.852 [Pipeline] } 00:46:21.869 [Pipeline] // timeout 00:46:21.874 [Pipeline] } 00:46:21.890 [Pipeline] // catchError 00:46:21.895 [Pipeline] } 00:46:21.912 [Pipeline] // wrap 00:46:21.918 [Pipeline] } 00:46:21.935 [Pipeline] // catchError 00:46:21.945 [Pipeline] stage 00:46:21.947 [Pipeline] { (Epilogue) 00:46:21.962 [Pipeline] catchError 00:46:21.964 [Pipeline] { 00:46:21.978 [Pipeline] echo 00:46:21.979 Cleanup processes 00:46:21.985 [Pipeline] sh 00:46:22.270 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:22.270 2482979 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:46:22.270 2483087 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:22.285 [Pipeline] sh 00:46:22.570 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:22.570 ++ grep -v 'sudo pgrep' 00:46:22.570 ++ awk '{print $1}' 00:46:22.570 + sudo kill -9 2482979 00:46:22.581 [Pipeline] sh 00:46:22.864 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:46:35.117 [Pipeline] sh 00:46:35.403 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:46:35.403 Artifacts sizes are good 00:46:35.419 [Pipeline] archiveArtifacts 00:46:35.426 Archiving artifacts 00:46:35.658 [Pipeline] sh 00:46:35.968 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:46:35.981 [Pipeline] cleanWs 00:46:35.991 [WS-CLEANUP] Deleting project workspace... 00:46:35.991 [WS-CLEANUP] Deferred wipeout is used... 00:46:35.998 [WS-CLEANUP] done 00:46:36.000 [Pipeline] } 00:46:36.019 [Pipeline] // catchError 00:46:36.031 [Pipeline] sh 00:46:36.313 + logger -p user.info -t JENKINS-CI 00:46:36.320 [Pipeline] } 00:46:36.335 [Pipeline] // stage 00:46:36.341 [Pipeline] } 00:46:36.356 [Pipeline] // node 00:46:36.360 [Pipeline] End of Pipeline 00:46:36.393 Finished: SUCCESS